Message ID | 20220723024316.2725328-1-junaids@google.com |
---|---|
State | New, archived |
Series | kvm: x86: mmu: Drop the need_remote_flush() function |
On Fri, Jul 22, 2022 at 07:43:16PM -0700, Junaid Shahid wrote:
> This is only used by kvm_mmu_pte_write(), which no longer actually
> creates the new SPTE and instead just clears the old SPTE. So we
> just need to check if the old SPTE was shadow-present instead of
> calling need_remote_flush(). Hence we can drop this function. It was
> incomplete anyway as it didn't take access-tracking into account.
>
> This patch should not result in any functional change.

Even if we don't assume anything about the new SPTE, this commit flushes
TLBs in a superset of the current cases. So this change should
definitely be safe.

And then if we assume the new SPTE is 0 (which it should be), I agree this
should not result in any functional change.

Reviewed-by: David Matlack <dmatlack@google.com>

>
> Signed-off-by: Junaid Shahid <junaids@google.com>
> ---
>  arch/x86/kvm/mmu/mmu.c | 15 +--------------
>  1 file changed, 1 insertion(+), 14 deletions(-)
>
> diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> index f0d7193db455..39959841beee 100644
> --- a/arch/x86/kvm/mmu/mmu.c
> +++ b/arch/x86/kvm/mmu/mmu.c
> @@ -5333,19 +5333,6 @@ void kvm_mmu_free_obsolete_roots(struct kvm_vcpu *vcpu)
>  	__kvm_mmu_free_obsolete_roots(vcpu->kvm, &vcpu->arch.guest_mmu);
>  }
>
> -static bool need_remote_flush(u64 old, u64 new)
> -{
> -	if (!is_shadow_present_pte(old))
> -		return false;
> -	if (!is_shadow_present_pte(new))
> -		return true;
> -	if ((old ^ new) & SPTE_BASE_ADDR_MASK)
> -		return true;
> -	old ^= shadow_nx_mask;
> -	new ^= shadow_nx_mask;
> -	return (old & ~new & SPTE_PERM_MASK) != 0;
> -}
> -
>  static u64 mmu_pte_write_fetch_gpte(struct kvm_vcpu *vcpu, gpa_t *gpa,
>  				    int *bytes)
>  {
> @@ -5491,7 +5478,7 @@ static void kvm_mmu_pte_write(struct kvm_vcpu *vcpu, gpa_t gpa,
>  		mmu_page_zap_pte(vcpu->kvm, sp, spte, NULL);
>  		if (gentry && sp->role.level != PG_LEVEL_4K)
>  			++vcpu->kvm->stat.mmu_pde_zapped;
> -		if (need_remote_flush(entry, *spte))
> +		if (is_shadow_present_pte(entry))
>  			flush = true;
>  		++spte;
>  	}
>
> base-commit: a4850b5590d01bf3fb19fda3fc5d433f7382a974
> --
> 2.37.1.359.gd136c6c3e2-goog
>
On Mon, Jul 25, 2022, David Matlack wrote:
> On Fri, Jul 22, 2022 at 07:43:16PM -0700, Junaid Shahid wrote:
> > This is only used by kvm_mmu_pte_write(), which no longer actually
> > creates the new SPTE and instead just clears the old SPTE. So we
> > just need to check if the old SPTE was shadow-present instead of
> > calling need_remote_flush(). Hence we can drop this function. It was
> > incomplete anyway as it didn't take access-tracking into account.
> >
> > This patch should not result in any functional change.
>
> Even if we don't assume anything about the new SPTE, this commit flushes
> TLBs in a superset of the current cases. So this change should
> definitely be safe.
>
> And then if we assume new SPTE is 0 (which it should be), I agree this
> should not result in any functional change.

Nit for posterity, zapped SPTEs don't necessarily have to be '0', e.g.
KVM is more than likely going to use 0x80000000_00000000 as the "zero"
value for TDP MMU SPTEs so that the SUPPRESS_VE bit is set for
"zero"-initialized SPTEs (TDX requires EPT Violation #VEs be enabled).

> Reviewed-by: David Matlack <dmatlack@google.com>

Reviewed-by: Sean Christopherson <seanjc@google.com>
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index f0d7193db455..39959841beee 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -5333,19 +5333,6 @@ void kvm_mmu_free_obsolete_roots(struct kvm_vcpu *vcpu)
 	__kvm_mmu_free_obsolete_roots(vcpu->kvm, &vcpu->arch.guest_mmu);
 }

-static bool need_remote_flush(u64 old, u64 new)
-{
-	if (!is_shadow_present_pte(old))
-		return false;
-	if (!is_shadow_present_pte(new))
-		return true;
-	if ((old ^ new) & SPTE_BASE_ADDR_MASK)
-		return true;
-	old ^= shadow_nx_mask;
-	new ^= shadow_nx_mask;
-	return (old & ~new & SPTE_PERM_MASK) != 0;
-}
-
 static u64 mmu_pte_write_fetch_gpte(struct kvm_vcpu *vcpu, gpa_t *gpa,
 				    int *bytes)
 {
@@ -5491,7 +5478,7 @@ static void kvm_mmu_pte_write(struct kvm_vcpu *vcpu, gpa_t gpa,
 		mmu_page_zap_pte(vcpu->kvm, sp, spte, NULL);
 		if (gentry && sp->role.level != PG_LEVEL_4K)
 			++vcpu->kvm->stat.mmu_pde_zapped;
-		if (need_remote_flush(entry, *spte))
+		if (is_shadow_present_pte(entry))
 			flush = true;
 		++spte;
 	}
This is only used by kvm_mmu_pte_write(), which no longer actually
creates the new SPTE and instead just clears the old SPTE. So we just
need to check if the old SPTE was shadow-present instead of calling
need_remote_flush(). Hence we can drop this function. It was incomplete
anyway as it didn't take access-tracking into account.

This patch should not result in any functional change.

Signed-off-by: Junaid Shahid <junaids@google.com>
---
 arch/x86/kvm/mmu/mmu.c | 15 +--------------
 1 file changed, 1 insertion(+), 14 deletions(-)

base-commit: a4850b5590d01bf3fb19fda3fc5d433f7382a974