Message ID | 20250207030810.1701-1-yan.y.zhao@intel.com (mailing list archive) |
---|---|
State | New |
Series | Small changes related to prefetch and spurious faults |
On Fri, Feb 07, 2025, Yan Zhao wrote:
> Merge the prefetch check into the is_access_allowed() check to determine a
> spurious fault.
>
> In the TDP MMU, a spurious prefetch fault should also pass the
> is_access_allowed() check.

How so?

1. vCPU takes a write-fault on a swapped out page and queues an async #PF
2. A different task installs a writable SPTE
3. A third task write-protects the SPTE for dirty logging
4. Async #PF handler faults in the SPTE, encounters a read-only SPTE for its
   write fault.

KVM shouldn't mark the gfn as dirty in this case.
On Fri, Feb 07, 2025 at 07:03:46AM -0800, Sean Christopherson wrote:
> On Fri, Feb 07, 2025, Yan Zhao wrote:
> > Merge the prefetch check into the is_access_allowed() check to determine a
> > spurious fault.
> >
> > In the TDP MMU, a spurious prefetch fault should also pass the
> > is_access_allowed() check.
>
> How so?
>
> 1. vCPU takes a write-fault on a swapped out page and queues an async #PF
> 2. A different task installs a writable SPTE
> 3. A third task write-protects the SPTE for dirty logging
> 4. Async #PF handler faults in the SPTE, encounters a read-only SPTE for its
>    write fault.
>
> KVM shouldn't mark the gfn as dirty in this case.

Hmm, but when we prefetch an entry, if a gfn is not write-tracked, it is
allowed to mark the gfn as dirty, just as a prefetch fault marks a gfn as
dirty when there's no existing SPTE.

If a gfn is write-tracked, make_spte() will not grant write permission to make
the gfn dirty.

However, I admit that marking the new SPTE as not-accessed again is not
desired. What about the below?

@@ -983,7 +983,7 @@ static int tdp_mmu_map_handle_target_level(struct kvm_vcpu *vcpu,
 		return RET_PF_RETRY;
 
 	if (is_shadow_present_pte(iter->old_spte) &&
-	    is_access_allowed(fault, iter->old_spte) &&
+	    (fault->prefetch || is_access_allowed(fault, iter->old_spte)) &&
 	    is_last_spte(iter->old_spte, iter->level))
 		return RET_PF_SPURIOUS;
On Sat, Feb 08, 2025, Yan Zhao wrote:
> On Fri, Feb 07, 2025 at 07:03:46AM -0800, Sean Christopherson wrote:
> > On Fri, Feb 07, 2025, Yan Zhao wrote:
> > > Merge the prefetch check into the is_access_allowed() check to determine a
> > > spurious fault.
> > >
> > > In the TDP MMU, a spurious prefetch fault should also pass the
> > > is_access_allowed() check.
> >
> > How so?
> >
> > 1. vCPU takes a write-fault on a swapped out page and queues an async #PF
> > 2. A different task installs a writable SPTE
> > 3. A third task write-protects the SPTE for dirty logging
> > 4. Async #PF handler faults in the SPTE, encounters a read-only SPTE for its
> >    write fault.
> >
> > KVM shouldn't mark the gfn as dirty in this case.
> Hmm, but when we prefetch an entry, if a gfn is not write-tracked, it allows to
> mark the gfn as dirty, just like when there's no existing SPTE, a prefetch fault
> also marks a gfn as dirty.

Yeah, but there's a difference between installing a SPTE and overwriting a SPTE.

> If a gfn is write-tracked, make_spte() will not grant write-permission to make
> the gfn dirty.
>
> However, I admit that making the new SPTE as not-accessed again is not desired.
> What about below?
>
> @@ -983,7 +983,7 @@ static int tdp_mmu_map_handle_target_level(struct kvm_vcpu *vcpu,
> 		return RET_PF_RETRY;
>
> 	if (is_shadow_present_pte(iter->old_spte) &&
> -	    is_access_allowed(fault, iter->old_spte) &&
> +	    (fault->prefetch || is_access_allowed(fault, iter->old_spte)) &&
> 	    is_last_spte(iter->old_spte, iter->level))
> 		return RET_PF_SPURIOUS;

Works for me.
diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index ab65fd915ef2..5f9e7374220e 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -1137,10 +1137,6 @@ static int tdp_mmu_map_handle_target_level(struct kvm_vcpu *vcpu,
 	if (WARN_ON_ONCE(sp->role.level != fault->goal_level))
 		return RET_PF_RETRY;
 
-	if (fault->prefetch && is_shadow_present_pte(iter->old_spte) &&
-	    is_last_spte(iter->old_spte, iter->level))
-		return RET_PF_SPURIOUS;
-
 	if (is_shadow_present_pte(iter->old_spte) &&
 	    is_access_allowed(fault, iter->old_spte) &&
 	    is_last_spte(iter->old_spte, iter->level))
Merge the prefetch check into the is_access_allowed() check to determine a
spurious fault.

In the TDP MMU, a spurious prefetch fault should also pass the
is_access_allowed() check. Combine these checks to avoid redundancy.

Signed-off-by: Yan Zhao <yan.y.zhao@intel.com>
---
 arch/x86/kvm/mmu/tdp_mmu.c | 4 ----
 1 file changed, 4 deletions(-)