Message ID | 20210824233407.1845924-1-dmatlack@google.com (mailing list archive) |
---|---|
State | New, archived |
Series | KVM: x86/mmu: Refactor slot null check in kvm_mmu_hugepage_adjust |
On Tue, Aug 24, 2021, David Matlack wrote:
> The current code is correct but relies on is_error_noslot_pfn() to
> ensure slot is not null. The only reason is_error_noslot_pfn() was
> checked instead is because we did not have the slot before
> commit 6574422f913e ("KVM: x86/mmu: Pass the memslot around via struct
> kvm_page_fault") and looking up the memslot is expensive.
>
> Now that the slot is available, explicitly check if it's null and
> get rid of the redundant is_error_noslot_pfn() check.
>
> Suggested-by: Sean Christopherson <seanjc@google.com>
> Signed-off-by: David Matlack <dmatlack@google.com>
> ---

Reviewed-by: Sean Christopherson <seanjc@google.com>
On 25/08/21 01:34, David Matlack wrote:
> The current code is correct but relies on is_error_noslot_pfn() to
> ensure slot is not null. The only reason is_error_noslot_pfn() was
> checked instead is because we did not have the slot before
> commit 6574422f913e ("KVM: x86/mmu: Pass the memslot around via struct
> kvm_page_fault") and looking up the memslot is expensive.
>
> Now that the slot is available, explicitly check if it's null and
> get rid of the redundant is_error_noslot_pfn() check.
>
> Suggested-by: Sean Christopherson <seanjc@google.com>
> Signed-off-by: David Matlack <dmatlack@google.com>
> ---
> arch/x86/kvm/mmu/mmu.c | 4 ++--
> 1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> index 4853c033e6ce..9b5424bcb173 100644
> --- a/arch/x86/kvm/mmu/mmu.c
> +++ b/arch/x86/kvm/mmu/mmu.c
> @@ -2925,10 +2925,10 @@ void kvm_mmu_hugepage_adjust(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault
>  	if (unlikely(fault->max_level == PG_LEVEL_4K))
>  		return;
>  
> -	if (is_error_noslot_pfn(fault->pfn) || kvm_is_reserved_pfn(fault->pfn))
> +	if (!slot || kvm_slot_dirty_track_enabled(slot))
>  		return;
>  
> -	if (kvm_slot_dirty_track_enabled(slot))
> +	if (kvm_is_reserved_pfn(fault->pfn))
>  		return;
>  
>  	/*

Squashed, thanks.

Paolo