| Message ID | 20210112181041.356734-25-bgardon@google.com (mailing list archive) |
|---|---|
| State | New, archived |
| Series | Allow parallel page faults with TDP MMU |
On Tue, Jan 12, 2021, Ben Gardon wrote:
> Make the last few changes necessary to enable the TDP MMU to handle page
> faults in parallel while holding the mmu_lock in read mode.
>
> Reviewed-by: Peter Feiner <pfeiner@google.com>
>
> Signed-off-by: Ben Gardon <bgardon@google.com>
> ---
>  arch/x86/kvm/mmu/mmu.c | 12 ++++++++++--
>  1 file changed, 10 insertions(+), 2 deletions(-)
>
> diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> index 280d7cd6f94b..fa111ceb67d4 100644
> --- a/arch/x86/kvm/mmu/mmu.c
> +++ b/arch/x86/kvm/mmu/mmu.c
> @@ -3724,7 +3724,12 @@ static int direct_page_fault(struct kvm_vcpu *vcpu, gpa_t gpa, u32 error_code,
>  		return r;
>
>  	r = RET_PF_RETRY;
> -	kvm_mmu_lock(vcpu->kvm);
> +
> +	if (is_tdp_mmu_root(vcpu->kvm, vcpu->arch.mmu->root_hpa))

Off topic, what do you think about rewriting is_tdp_mmu_root() to be both more
performant and self-documenting as to when is_tdp_mmu_root() !=
kvm->arch.tdp_mmu_enabled? E.g. key off is_guest_mode() and then do a thorough
audit/check when CONFIG_KVM_MMU_AUDIT=y?

#ifdef CONFIG_KVM_MMU_AUDIT
bool is_tdp_mmu_root(struct kvm *kvm, hpa_t hpa)
{
	struct kvm_mmu_page *sp;

	if (!kvm->arch.tdp_mmu_enabled)
		return false;
	if (WARN_ON(!VALID_PAGE(hpa)))
		return false;

	sp = to_shadow_page(hpa);
	if (WARN_ON(!sp))
		return false;

	return sp->tdp_mmu_page && sp->root_count;
}
#endif

bool is_tdp_mmu(struct kvm_vcpu *vcpu)
{
	bool is_tdp_mmu = kvm->arch.tdp_mmu_enabled && !is_guest_mode(vcpu);

#ifdef CONFIG_KVM_MMU_AUDIT
	WARN_ON(is_tdp_mmu != is_tdp_mmu_root(vcpu->kvm, vcpu->arch.mmu->root_hpa));
#endif
	return is_tdp_mmu;
}

> +		kvm_mmu_lock_shared(vcpu->kvm);
> +	else
> +		kvm_mmu_lock(vcpu->kvm);
> +
>  	if (mmu_notifier_retry(vcpu->kvm, mmu_seq))
>  		goto out_unlock;
>  	r = make_mmu_pages_available(vcpu);
> @@ -3739,7 +3744,10 @@ static int direct_page_fault(struct kvm_vcpu *vcpu, gpa_t gpa, u32 error_code,
>  					 prefault, is_tdp);
>
> out_unlock:
> -	kvm_mmu_unlock(vcpu->kvm);
> +	if (is_tdp_mmu_root(vcpu->kvm, vcpu->arch.mmu->root_hpa))
> +		kvm_mmu_unlock_shared(vcpu->kvm);
> +	else
> +		kvm_mmu_unlock(vcpu->kvm);
>  	kvm_release_pfn_clean(pfn);
>  	return r;
>  }
> --
> 2.30.0.284.gd98b1dd5eaa7-goog
>
On 12/01/21 19:10, Ben Gardon wrote:
> +	if (is_tdp_mmu_root(vcpu->kvm, vcpu->arch.mmu->root_hpa))
> +		kvm_mmu_lock_shared(vcpu->kvm);
> +	else
> +		kvm_mmu_lock(vcpu->kvm);

Perhaps the better API would be kvm_mmu_lock/unlock_root; not exposing
kvm_mmu_lock/unlock_shared and kvm_mmu_lock/unlock_exclusive at all, just
like you use rwlock_needbreak directly in kvm_mmu_lock_needbreak.

Paolo
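For illustration only, a minimal sketch of what Paolo's suggested kvm_mmu_lock_root()/kvm_mmu_unlock_root() wrappers might look like, assuming the rwlock-based mmu_lock introduced earlier in the series and the existing is_tdp_mmu_root() check; the helper names, placement, and bodies below are assumptions, not code from the posted patches.

```c
/*
 * Hypothetical wrappers sketching Paolo's suggestion: hide the shared vs.
 * exclusive decision behind a single "root" lock API so that callers such
 * as direct_page_fault() do not open-code the is_tdp_mmu_root() check.
 * Assumes kvm->mmu_lock has already been converted to an rwlock_t, as in
 * the earlier patches of this series.
 */
static inline void kvm_mmu_lock_root(struct kvm_vcpu *vcpu)
{
	if (is_tdp_mmu_root(vcpu->kvm, vcpu->arch.mmu->root_hpa))
		read_lock(&vcpu->kvm->mmu_lock);
	else
		write_lock(&vcpu->kvm->mmu_lock);
}

static inline void kvm_mmu_unlock_root(struct kvm_vcpu *vcpu)
{
	if (is_tdp_mmu_root(vcpu->kvm, vcpu->arch.mmu->root_hpa))
		read_unlock(&vcpu->kvm->mmu_lock);
	else
		write_unlock(&vcpu->kvm->mmu_lock);
}
```

With helpers along these lines, the two branches added to direct_page_fault() would collapse back to a single kvm_mmu_lock_root(vcpu)/kvm_mmu_unlock_root(vcpu) pair.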
On Wed, Jan 20, 2021 at 4:56 PM Sean Christopherson <seanjc@google.com> wrote:
>
> On Tue, Jan 12, 2021, Ben Gardon wrote:
> > Make the last few changes necessary to enable the TDP MMU to handle page
> > faults in parallel while holding the mmu_lock in read mode.
> >
> > Reviewed-by: Peter Feiner <pfeiner@google.com>
> >
> > Signed-off-by: Ben Gardon <bgardon@google.com>
> > ---
> >  arch/x86/kvm/mmu/mmu.c | 12 ++++++++++--
> >  1 file changed, 10 insertions(+), 2 deletions(-)
> >
> > diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> > index 280d7cd6f94b..fa111ceb67d4 100644
> > --- a/arch/x86/kvm/mmu/mmu.c
> > +++ b/arch/x86/kvm/mmu/mmu.c
> > @@ -3724,7 +3724,12 @@ static int direct_page_fault(struct kvm_vcpu *vcpu, gpa_t gpa, u32 error_code,
> >  		return r;
> >
> >  	r = RET_PF_RETRY;
> > -	kvm_mmu_lock(vcpu->kvm);
> > +
> > +	if (is_tdp_mmu_root(vcpu->kvm, vcpu->arch.mmu->root_hpa))
>
> Off topic, what do you think about rewriting is_tdp_mmu_root() to be both more
> performant and self-documenting as to when is_tdp_mmu_root() !=
> kvm->arch.tdp_mmu_enabled? E.g. key off is_guest_mode() and then do a thorough
> audit/check when CONFIG_KVM_MMU_AUDIT=y?
>
> #ifdef CONFIG_KVM_MMU_AUDIT
> bool is_tdp_mmu_root(struct kvm *kvm, hpa_t hpa)
> {
> 	struct kvm_mmu_page *sp;
>
> 	if (!kvm->arch.tdp_mmu_enabled)
> 		return false;
> 	if (WARN_ON(!VALID_PAGE(hpa)))
> 		return false;
>
> 	sp = to_shadow_page(hpa);
> 	if (WARN_ON(!sp))
> 		return false;
>
> 	return sp->tdp_mmu_page && sp->root_count;
> }
> #endif
>
> bool is_tdp_mmu(struct kvm_vcpu *vcpu)
> {
> 	bool is_tdp_mmu = kvm->arch.tdp_mmu_enabled && !is_guest_mode(vcpu);
>
> #ifdef CONFIG_KVM_MMU_AUDIT
> 	WARN_ON(is_tdp_mmu != is_tdp_mmu_root(vcpu->kvm, vcpu->arch.mmu->root_hpa));
> #endif
> 	return is_tdp_mmu;
> }

Great suggestions. In the interest of keeping this (already enormous)
series small, I'm inclined to make those changes in a future series if
that's alright with you.

>
> > +		kvm_mmu_lock_shared(vcpu->kvm);
> > +	else
> > +		kvm_mmu_lock(vcpu->kvm);
> > +
> >  	if (mmu_notifier_retry(vcpu->kvm, mmu_seq))
> >  		goto out_unlock;
> >  	r = make_mmu_pages_available(vcpu);
> > @@ -3739,7 +3744,10 @@ static int direct_page_fault(struct kvm_vcpu *vcpu, gpa_t gpa, u32 error_code,
> >  					 prefault, is_tdp);
> >
> > out_unlock:
> > -	kvm_mmu_unlock(vcpu->kvm);
> > +	if (is_tdp_mmu_root(vcpu->kvm, vcpu->arch.mmu->root_hpa))
> > +		kvm_mmu_unlock_shared(vcpu->kvm);
> > +	else
> > +		kvm_mmu_unlock(vcpu->kvm);
> >  	kvm_release_pfn_clean(pfn);
> >  	return r;
> >  }
> > --
> > 2.30.0.284.gd98b1dd5eaa7-goog
> >
On Tue, Jan 26, 2021, Ben Gardon wrote:
> On Wed, Jan 20, 2021 at 4:56 PM Sean Christopherson <seanjc@google.com> wrote:
> >
> > On Tue, Jan 12, 2021, Ben Gardon wrote:
> > > Make the last few changes necessary to enable the TDP MMU to handle page
> > > faults in parallel while holding the mmu_lock in read mode.
> > >
> > > Reviewed-by: Peter Feiner <pfeiner@google.com>
> > >
> > > Signed-off-by: Ben Gardon <bgardon@google.com>
> > > ---
> > >  arch/x86/kvm/mmu/mmu.c | 12 ++++++++++--
> > >  1 file changed, 10 insertions(+), 2 deletions(-)
> > >
> > > diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> > > index 280d7cd6f94b..fa111ceb67d4 100644
> > > --- a/arch/x86/kvm/mmu/mmu.c
> > > +++ b/arch/x86/kvm/mmu/mmu.c
> > > @@ -3724,7 +3724,12 @@ static int direct_page_fault(struct kvm_vcpu *vcpu, gpa_t gpa, u32 error_code,
> > >  		return r;
> > >
> > >  	r = RET_PF_RETRY;
> > > -	kvm_mmu_lock(vcpu->kvm);
> > > +
> > > +	if (is_tdp_mmu_root(vcpu->kvm, vcpu->arch.mmu->root_hpa))
> >
> > Off topic, what do you think about rewriting is_tdp_mmu_root() to be both more
> > performant and self-documenting as to when is_tdp_mmu_root() !=
> > kvm->arch.tdp_mmu_enabled? E.g. key off is_guest_mode() and then do a thorough
> > audit/check when CONFIG_KVM_MMU_AUDIT=y?
> >
> > #ifdef CONFIG_KVM_MMU_AUDIT
> > bool is_tdp_mmu_root(struct kvm *kvm, hpa_t hpa)
> > {
> > 	struct kvm_mmu_page *sp;
> >
> > 	if (!kvm->arch.tdp_mmu_enabled)
> > 		return false;
> > 	if (WARN_ON(!VALID_PAGE(hpa)))
> > 		return false;
> >
> > 	sp = to_shadow_page(hpa);
> > 	if (WARN_ON(!sp))
> > 		return false;
> >
> > 	return sp->tdp_mmu_page && sp->root_count;
> > }
> > #endif
> >
> > bool is_tdp_mmu(struct kvm_vcpu *vcpu)
> > {
> > 	bool is_tdp_mmu = kvm->arch.tdp_mmu_enabled && !is_guest_mode(vcpu);
> >
> > #ifdef CONFIG_KVM_MMU_AUDIT
> > 	WARN_ON(is_tdp_mmu != is_tdp_mmu_root(vcpu->kvm, vcpu->arch.mmu->root_hpa));
> > #endif
> > 	return is_tdp_mmu;
> > }
>
> Great suggestions. In the interest of keeping this (already enormous)
> series small, I'm inclined to make those changes in a future series if
> that's alright with you.

Yep, definitely a different series.
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 280d7cd6f94b..fa111ceb67d4 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -3724,7 +3724,12 @@ static int direct_page_fault(struct kvm_vcpu *vcpu, gpa_t gpa, u32 error_code,
 		return r;

 	r = RET_PF_RETRY;
-	kvm_mmu_lock(vcpu->kvm);
+
+	if (is_tdp_mmu_root(vcpu->kvm, vcpu->arch.mmu->root_hpa))
+		kvm_mmu_lock_shared(vcpu->kvm);
+	else
+		kvm_mmu_lock(vcpu->kvm);
+
 	if (mmu_notifier_retry(vcpu->kvm, mmu_seq))
 		goto out_unlock;
 	r = make_mmu_pages_available(vcpu);
@@ -3739,7 +3744,10 @@ static int direct_page_fault(struct kvm_vcpu *vcpu, gpa_t gpa, u32 error_code,
 					 prefault, is_tdp);

 out_unlock:
-	kvm_mmu_unlock(vcpu->kvm);
+	if (is_tdp_mmu_root(vcpu->kvm, vcpu->arch.mmu->root_hpa))
+		kvm_mmu_unlock_shared(vcpu->kvm);
+	else
+		kvm_mmu_unlock(vcpu->kvm);
 	kvm_release_pfn_clean(pfn);
 	return r;
 }
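For context, here is a hedged sketch of what the kvm_mmu_lock_shared()/kvm_mmu_unlock_shared() helpers used in the hunks above presumably expand to, given that earlier patches in the series convert mmu_lock to an rwlock_t on x86; the bodies below are an assumption for illustration, not copied from the posted series.

```c
/*
 * Assumed definitions of the shared-mode helpers referenced by the diff
 * above: "shared" maps to the read side of the rwlock so multiple vCPUs
 * can fault in parallel on TDP MMU roots, while the existing
 * kvm_mmu_lock()/kvm_mmu_unlock() take the write (exclusive) side.
 */
static inline void kvm_mmu_lock_shared(struct kvm *kvm)
{
	read_lock(&kvm->mmu_lock);
}

static inline void kvm_mmu_unlock_shared(struct kvm *kvm)
{
	read_unlock(&kvm->mmu_lock);
}
```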