Message ID | 20211102032900.1888262-1-junaids@google.com (mailing list archive) |
---|---|
State | New, archived |
Series | kvm: mmu: Use fast PF path for access tracking of huge pages when possible |
On Mon, Nov 1, 2021 at 8:30 PM Junaid Shahid <junaids@google.com> wrote:
>
> The fast page fault path bails out on write faults to huge pages in
> order to accommodate dirty logging. This change adds a check to do that
> only when dirty logging is actually enabled, so that access tracking for
> huge pages can still use the fast path for write faults in the common
> case.
>
> Signed-off-by: Junaid Shahid <junaids@google.com>

Reviewed-by: Ben Gardon <bgardon@google.com>

> ---
> arch/x86/kvm/mmu/mmu.c | 8 +++++---
> 1 file changed, 5 insertions(+), 3 deletions(-)
>
> diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> index 354d2ca92df4..5df9181c5082 100644
> --- a/arch/x86/kvm/mmu/mmu.c
> +++ b/arch/x86/kvm/mmu/mmu.c
> @@ -3191,8 +3191,9 @@ static int fast_page_fault(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
> new_spte |= PT_WRITABLE_MASK;
>
> /*
> - * Do not fix write-permission on the large spte. Since
> - * we only dirty the first page into the dirty-bitmap in
> + * Do not fix write-permission on the large spte when
> + * dirty logging is enabled. Since we only dirty the
> + * first page into the dirty-bitmap in
> * fast_pf_fix_direct_spte(), other pages are missed
> * if its slot has dirty logging enabled.
> *
> @@ -3201,7 +3202,8 @@ static int fast_page_fault(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
> *
> * See the comments in kvm_arch_commit_memory_region().
> */
> - if (sp->role.level > PG_LEVEL_4K)
> + if (sp->role.level > PG_LEVEL_4K &&
> + kvm_slot_dirty_track_enabled(fault->slot))
> break;
> }
>
> --
> 2.33.1.1089.g2158813163f-goog
>
On Mon, Nov 01, 2021, Junaid Shahid wrote:
> The fast page fault path bails out on write faults to huge pages in
> order to accommodate dirty logging. This change adds a check to do that
> only when dirty logging is actually enabled, so that access tracking for
> huge pages can still use the fast path for write faults in the common
> case.
>
> Signed-off-by: Junaid Shahid <junaids@google.com>

One nit, otherwise

Reviewed-by: Sean Christopherson <seanjc@google.com>

> ---
> arch/x86/kvm/mmu/mmu.c | 8 +++++---
> 1 file changed, 5 insertions(+), 3 deletions(-)
>
> diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> index 354d2ca92df4..5df9181c5082 100644
> --- a/arch/x86/kvm/mmu/mmu.c
> +++ b/arch/x86/kvm/mmu/mmu.c
> @@ -3191,8 +3191,9 @@ static int fast_page_fault(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
> new_spte |= PT_WRITABLE_MASK;
>
> /*
> - * Do not fix write-permission on the large spte. Since
> - * we only dirty the first page into the dirty-bitmap in
> + * Do not fix write-permission on the large spte when
> + * dirty logging is enabled. Since we only dirty the
> + * first page into the dirty-bitmap in
> * fast_pf_fix_direct_spte(), other pages are missed
> * if its slot has dirty logging enabled.
> *
> @@ -3201,7 +3202,8 @@ static int fast_page_fault(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
> *
> * See the comments in kvm_arch_commit_memory_region().

This part is slightly stale as kvm_mmu_slot_apply_flags() now has the comments.
Maybe just drop it entirely? The comments there don't do a whole lot to make
this code more understandable.

> */
> - if (sp->role.level > PG_LEVEL_4K)
> + if (sp->role.level > PG_LEVEL_4K &&
> + kvm_slot_dirty_track_enabled(fault->slot))
> break;
> }
>
> --
> 2.33.1.1089.g2158813163f-goog
>
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 354d2ca92df4..5df9181c5082 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -3191,8 +3191,9 @@ static int fast_page_fault(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
 new_spte |= PT_WRITABLE_MASK;

 /*
- * Do not fix write-permission on the large spte. Since
- * we only dirty the first page into the dirty-bitmap in
+ * Do not fix write-permission on the large spte when
+ * dirty logging is enabled. Since we only dirty the
+ * first page into the dirty-bitmap in
 * fast_pf_fix_direct_spte(), other pages are missed
 * if its slot has dirty logging enabled.
 *
@@ -3201,7 +3202,8 @@ static int fast_page_fault(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
 *
 * See the comments in kvm_arch_commit_memory_region().
 */
- if (sp->role.level > PG_LEVEL_4K)
+ if (sp->role.level > PG_LEVEL_4K &&
+ kvm_slot_dirty_track_enabled(fault->slot))
 break;
 }
The fast page fault path bails out on write faults to huge pages in
order to accommodate dirty logging. This change adds a check to do that
only when dirty logging is actually enabled, so that access tracking for
huge pages can still use the fast path for write faults in the common
case.

Signed-off-by: Junaid Shahid <junaids@google.com>
---
 arch/x86/kvm/mmu/mmu.c | 8 +++++---
 1 file changed, 5 insertions(+), 3 deletions(-)
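
For readers skimming the archive, the behavioral change reduces to one extra condition in the fast path's handling of write faults. Below is a standalone, hypothetical C model of that decision; the struct and field names are invented for illustration and are not KVM's, only the level/dirty-logging logic mirrors the patch.

```c
#include <stdbool.h>
#include <stdio.h>

/* Hypothetical stand-ins for the KVM objects involved; these are NOT the
 * kernel's definitions, just enough structure to model the check. */
struct memslot { bool dirty_logging_enabled; };
struct page_fault {
	bool write;              /* write access caused the fault          */
	int level;               /* mapping level: 1 = 4K, 2 = 2M, 3 = 1G  */
	struct memslot *slot;    /* memslot backing the faulting GFN       */
};

/* Before the patch: every write fault on a huge (>4K) mapping bailed out of
 * the fast path, even when the slot was not being dirty logged. */
static bool bails_out_before(const struct page_fault *f)
{
	return f->write && f->level > 1;
}

/* After the patch: bail out only when dirty logging is actually enabled, so
 * access-tracking write faults on huge pages stay on the fast path. */
static bool bails_out_after(const struct page_fault *f)
{
	return f->write && f->level > 1 && f->slot->dirty_logging_enabled;
}

int main(void)
{
	struct memslot slot = { .dirty_logging_enabled = false };
	struct page_fault fault = { .write = true, .level = 2, .slot = &slot };

	/* Prints "before: 1 after: 0": the common case with no dirty logging
	 * no longer falls back to the slow path. */
	printf("before: %d after: %d\n",
	       bails_out_before(&fault), bails_out_after(&fault));
	return 0;
}
```

In the kernel itself the "is dirty logging enabled" test is the kvm_slot_dirty_track_enabled(fault->slot) call visible in the diff above.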