Message ID | 20210112181041.356734-10-bgardon@google.com (mailing list archive)
---|---
State | New, archived |
Series | Allow parallel page faults with TDP MMU
On Tue, Jan 12, 2021, Ben Gardon wrote:
> The KVM MMU caches already guarantee that shadow page table memory will
> be zeroed, so there is no reason to re-zero the page in the TDP MMU page
> fault handler.
>
> No functional change intended.
>
> Reviewed-by: Peter Feiner <pfeiner@google.com>
>
> Signed-off-by: Ben Gardon <bgardon@google.com>

Reviewed-by: Sean Christopherson <seanjc@google.com>
On 12/01/21 19:10, Ben Gardon wrote:
> The KVM MMU caches already guarantee that shadow page table memory will
> be zeroed, so there is no reason to re-zero the page in the TDP MMU page
> fault handler.
>
> No functional change intended.
>
> Reviewed-by: Peter Feiner <pfeiner@google.com>
>
> Signed-off-by: Ben Gardon <bgardon@google.com>
> ---
>  arch/x86/kvm/mmu/tdp_mmu.c | 1 -
>  1 file changed, 1 deletion(-)
>
> diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
> index 411938e97a00..55df596696c7 100644
> --- a/arch/x86/kvm/mmu/tdp_mmu.c
> +++ b/arch/x86/kvm/mmu/tdp_mmu.c
> @@ -665,7 +665,6 @@ int kvm_tdp_mmu_map(struct kvm_vcpu *vcpu, gpa_t gpa, u32 error_code,
>  		sp = alloc_tdp_mmu_page(vcpu, iter.gfn, iter.level);
>  		list_add(&sp->link, &vcpu->kvm->arch.tdp_mmu_pages);
>  		child_pt = sp->spt;
> -		clear_page(child_pt);
>  		new_spte = make_nonleaf_spte(child_pt,
>  					     !shadow_accessed_mask);

Queued, thanks.

Paolo
diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index 411938e97a00..55df596696c7 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -665,7 +665,6 @@ int kvm_tdp_mmu_map(struct kvm_vcpu *vcpu, gpa_t gpa, u32 error_code,
 		sp = alloc_tdp_mmu_page(vcpu, iter.gfn, iter.level);
 		list_add(&sp->link, &vcpu->kvm->arch.tdp_mmu_pages);
 		child_pt = sp->spt;
-		clear_page(child_pt);
 		new_spte = make_nonleaf_spte(child_pt,
 					     !shadow_accessed_mask);
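
For context, the zeroing guarantee the commit message relies on comes from the vCPU's MMU memory caches: kvm_mmu_create() sets the shadow page cache's gfp_zero flag to __GFP_ZERO, so pages handed out by alloc_tdp_mmu_page() are already zeroed when the fault handler gets them, making the clear_page() call redundant. Below is a minimal userspace sketch of that pattern, not kernel code; the names (page_cache, cache_topup, cache_alloc) are hypothetical stand-ins for the kernel's kvm_mmu_memory_cache machinery.

```c
/*
 * Userspace analogy of the KVM MMU memory-cache guarantee: objects are
 * zeroed at cache-refill time (cf. __GFP_ZERO on mmu_shadow_page_cache),
 * so consumers never need to clear them again.  Hypothetical demo only.
 */
#include <assert.h>
#include <stdlib.h>

#define PAGE_SIZE	4096
#define CACHE_CAPACITY	8

struct page_cache {
	void *objs[CACHE_CAPACITY];
	int nobjs;
};

/* Refill with pre-zeroed pages (the kernel allocates with __GFP_ZERO). */
static void cache_topup(struct page_cache *mc)
{
	while (mc->nobjs < CACHE_CAPACITY)
		mc->objs[mc->nobjs++] = calloc(1, PAGE_SIZE);
}

/* Pop a page; it is already zeroed, so no clear_page() equivalent here. */
static void *cache_alloc(struct page_cache *mc)
{
	assert(mc->nobjs > 0);
	return mc->objs[--mc->nobjs];
}

int main(void)
{
	struct page_cache mc = { .nobjs = 0 };
	unsigned char *child_pt;
	int i;

	cache_topup(&mc);
	child_pt = cache_alloc(&mc);

	/* The invariant the patch depends on: the page arrives zeroed. */
	for (i = 0; i < PAGE_SIZE; i++)
		assert(child_pt[i] == 0);

	free(child_pt);
	while (mc.nobjs > 0)
		free(mc.objs[--mc.nobjs]);
	return 0;
}
```

The design point mirrors the patch's rationale: zeroing once at allocation time, inside the cache, is cheaper and less error-prone than having every consumer re-clear memory it already received clean.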