Message ID | 20230206165851.3106338-12-ricarkol@google.com (mailing list archive) |
---|---|
State | New, archived |
Series | Implement Eager Page Splitting for ARM. |
Hi Ricardo,

On 2/7/23 3:58 AM, Ricardo Koller wrote:
> This is the arm64 counterpart of commit cb00a70bd4b7 ("KVM: x86/mmu:
> Split huge pages mapped by the TDP MMU during KVM_CLEAR_DIRTY_LOG"),
> which has the benefit of splitting the cost of splitting a memslot
> across multiple ioctls.
>
> Split huge pages on the range specified using KVM_CLEAR_DIRTY_LOG.
> And do not split when enabling dirty logging if
> KVM_DIRTY_LOG_INITIALLY_SET is set.
>
> Signed-off-by: Ricardo Koller <ricarkol@google.com>
> ---
>  arch/arm64/kvm/mmu.c | 15 ++++++++++++---
>  1 file changed, 12 insertions(+), 3 deletions(-)
>
> diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
> index f6fb2bdaab71..da2fbd04fb01 100644
> --- a/arch/arm64/kvm/mmu.c
> +++ b/arch/arm64/kvm/mmu.c
> @@ -1084,8 +1084,8 @@ static void kvm_mmu_split_memory_region(struct kvm *kvm, int slot)
>   * @mask: The mask of pages at offset 'gfn_offset' in this memory
>   *        slot to enable dirty logging on
>   *
> - * Writes protect selected pages to enable dirty logging for them. Caller must
> - * acquire kvm->mmu_lock.
> + * Splits selected pages to PAGE_SIZE and then writes protect them to enable
> + * dirty logging for them. Caller must acquire kvm->mmu_lock.
>   */
>  void kvm_arch_mmu_enable_log_dirty_pt_masked(struct kvm *kvm,
>  					     struct kvm_memory_slot *slot,
> @@ -1098,6 +1098,13 @@ void kvm_arch_mmu_enable_log_dirty_pt_masked(struct kvm *kvm,
>  	lockdep_assert_held_write(&kvm->mmu_lock);
>
>  	stage2_wp_range(&kvm->arch.mmu, start, end);
> +
> +	/*
> +	 * If initially-all-set mode is not set, then huge-pages were already
> +	 * split when enabling dirty logging: no need to do it again.
> +	 */
> +	if (kvm_dirty_log_manual_protect_and_init_set(kvm))
> +		kvm_mmu_split_huge_pages(kvm, start, end);
>  }
>
>  static void kvm_send_hwpoison_signal(unsigned long address, short lsb)
> @@ -1884,7 +1891,9 @@ void kvm_arch_commit_memory_region(struct kvm *kvm,
>  	 * this when deleting, moving, disabling dirty logging, or
>  	 * creating the memslot (a nop). Doing it for deletes makes
>  	 * sure we don't leak memory, and there's no need to keep the
> -	 * cache around for any of the other cases.
> +	 * cache around for any of the other cases. Keeping the cache
> +	 * is useful for succesive KVM_CLEAR_DIRTY_LOG calls, which is
> +	 * not handled in this function.
>  	 */
>  	kvm_mmu_free_memory_cache(&kvm->arch.mmu.split_page_cache);
>  }

s/succesive/successive

Thanks,
Gavin
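For context on the check that gates the new call: kvm_dirty_log_manual_protect_and_init_set() is a generic helper in include/linux/kvm_host.h that tests whether userspace enabled the initially-all-set mode. A sketch of its definition, quoted from memory of kernels around this series, so verify against your tree:

```c
/* include/linux/kvm_host.h (sketch, not verbatim) */
static inline bool kvm_dirty_log_manual_protect_and_init_set(struct kvm *kvm)
{
	/*
	 * True only if userspace enabled KVM_CAP_MANUAL_DIRTY_LOG_PROTECT2
	 * with the KVM_DIRTY_LOG_INITIALLY_SET flag, i.e. dirty bitmaps
	 * start out all-set and pages are write-protected lazily via
	 * KVM_CLEAR_DIRTY_LOG rather than eagerly at enable time.
	 */
	return !!(kvm->manual_dirty_log_protect & KVM_DIRTY_LOG_INITIALLY_SET);
}
```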
```diff
diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index f6fb2bdaab71..da2fbd04fb01 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -1084,8 +1084,8 @@ static void kvm_mmu_split_memory_region(struct kvm *kvm, int slot)
  * @mask: The mask of pages at offset 'gfn_offset' in this memory
  *        slot to enable dirty logging on
  *
- * Writes protect selected pages to enable dirty logging for them. Caller must
- * acquire kvm->mmu_lock.
+ * Splits selected pages to PAGE_SIZE and then writes protect them to enable
+ * dirty logging for them. Caller must acquire kvm->mmu_lock.
  */
 void kvm_arch_mmu_enable_log_dirty_pt_masked(struct kvm *kvm,
 					     struct kvm_memory_slot *slot,
@@ -1098,6 +1098,13 @@ void kvm_arch_mmu_enable_log_dirty_pt_masked(struct kvm *kvm,
 	lockdep_assert_held_write(&kvm->mmu_lock);
 
 	stage2_wp_range(&kvm->arch.mmu, start, end);
+
+	/*
+	 * If initially-all-set mode is not set, then huge-pages were already
+	 * split when enabling dirty logging: no need to do it again.
+	 */
+	if (kvm_dirty_log_manual_protect_and_init_set(kvm))
+		kvm_mmu_split_huge_pages(kvm, start, end);
 }
 
 static void kvm_send_hwpoison_signal(unsigned long address, short lsb)
@@ -1884,7 +1891,9 @@ void kvm_arch_commit_memory_region(struct kvm *kvm,
 	 * this when deleting, moving, disabling dirty logging, or
 	 * creating the memslot (a nop). Doing it for deletes makes
 	 * sure we don't leak memory, and there's no need to keep the
-	 * cache around for any of the other cases.
+	 * cache around for any of the other cases. Keeping the cache
+	 * is useful for succesive KVM_CLEAR_DIRTY_LOG calls, which is
+	 * not handled in this function.
 	 */
 	kvm_mmu_free_memory_cache(&kvm->arch.mmu.split_page_cache);
 }
```
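To see where this path is exercised from userspace, here is a minimal sketch of a KVM_CLEAR_DIRTY_LOG call against a VM fd. The clear_dirty_range() helper name is made up for illustration; the ioctl and struct kvm_clear_dirty_log are the real KVM UAPI. With this patch, each such call also splits any huge pages backing the cleared range:

```c
#include <linux/kvm.h>
#include <sys/ioctl.h>

/*
 * Hypothetical helper: ask KVM to re-protect (and, with this patch,
 * split huge pages for) the pages whose bits are set in `bitmap`,
 * covering [first_page, first_page + num_pages) of memslot `slot`.
 * Per the KVM API docs, first_page and num_pages generally must be
 * multiples of 64 (num_pages may be smaller at the end of the slot).
 */
static int clear_dirty_range(int vm_fd, __u32 slot, __u64 first_page,
			     __u32 num_pages, void *bitmap)
{
	struct kvm_clear_dirty_log clear = {
		.slot = slot,
		.num_pages = num_pages,
		.first_page = first_page,
		.dirty_bitmap = bitmap,	/* one bit per page to clear */
	};

	return ioctl(vm_fd, KVM_CLEAR_DIRTY_LOG, &clear);
}
```

Because each call covers only part of a memslot, the splitting work scales with the cleared range rather than the whole slot, which is the cost-spreading benefit the commit message refers to.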
This is the arm64 counterpart of commit cb00a70bd4b7 ("KVM: x86/mmu:
Split huge pages mapped by the TDP MMU during KVM_CLEAR_DIRTY_LOG"),
which has the benefit of splitting the cost of splitting a memslot
across multiple ioctls.

Split huge pages on the range specified using KVM_CLEAR_DIRTY_LOG.
And do not split when enabling dirty logging if
KVM_DIRTY_LOG_INITIALLY_SET is set.

Signed-off-by: Ricardo Koller <ricarkol@google.com>
---
 arch/arm64/kvm/mmu.c | 15 ++++++++++++---
 1 file changed, 12 insertions(+), 3 deletions(-)
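The KVM_DIRTY_LOG_INITIALLY_SET condition depends on userspace opting in when it sets up dirty logging. A minimal sketch of that opt-in, assuming a VM fd obtained via KVM_CREATE_VM; the helper name is hypothetical, while the capability and flags are the real UAPI:

```c
#include <linux/kvm.h>
#include <sys/ioctl.h>

/*
 * Hypothetical helper: enable manual dirty-log protection with the
 * initially-all-set behavior. Dirty bitmaps then start out all-set,
 * no huge pages are split when dirty logging is enabled, and the
 * splitting cost is paid incrementally by KVM_CLEAR_DIRTY_LOG.
 */
static int enable_initially_all_set(int vm_fd)
{
	struct kvm_enable_cap cap = {
		.cap = KVM_CAP_MANUAL_DIRTY_LOG_PROTECT2,
		.args[0] = KVM_DIRTY_LOG_MANUAL_PROTECT_ENABLE |
			   KVM_DIRTY_LOG_INITIALLY_SET,
	};

	return ioctl(vm_fd, KVM_ENABLE_CAP, &cap);
}
```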