@@ -746,6 +746,54 @@ static int kvm_mips_map_page(struct kvm_vcpu *vcpu, unsigned long gpa,
*/
spin_unlock(&kvm->mmu_lock);
kvm_release_pfn_clean(pfn);
+ /*
+ * Add a cond_resched() to give the scheduler a chance to run
+ * the madvise task and avoid an endless loop here on a
+ * non-preemptible kernel.
+ * Otherwise, mmu_notifier_count would have no chance to be
+ * decreased to 0 by the madvise task -> syscall -> zap_page_range
+ * -> mmu_notifier_invalidate_range_end ->
+ * __mmu_notifier_invalidate_range_end -> invalidate_range_end
+ * -> kvm_mmu_notifier_invalidate_range_end, as the madvise task
+ * may be scheduled out while running unmap_single_vma ->
+ * unmap_page_range -> zap_p4d_range -> zap_pud_range ->
+ * zap_pmd_range -> cond_resched, which is called before
+ * mmu_notifier_invalidate_range_end in zap_page_range.
+ *
+ * When handling a GPA fault by creating a new GPA mapping in
+ * kvm_mips_map_page, it retries until a page becomes
+ * available.
+ * In the low-memory case, it is waiting for memory to be
+ * freed by the madvise syscall with MADV_DONTNEED (QEMU
+ * application -> madvise with MADV_DONTNEED -> syscall ->
+ * madvise_vma -> madvise_dontneed_free ->
+ * madvise_dontneed_single_vma -> zap_page_range). In
+ * zap_page_range, after the TLB for the given address range
+ * is flushed by unmap_single_vma, it calls
+ * __mmu_notifier_invalidate_range_end, which finally calls
+ * kvm_mmu_notifier_invalidate_range_end to decrease
+ * mmu_notifier_count to 0. The retry loop in
+ * kvm_mips_map_page checks mmu_notifier_count; if the value
+ * is 0, which indicates that a new page is available for
+ * mapping, it jumps out of the retry loop and sets up the
+ * PTE for the new GPA mapping.
+ * During the TLB flush (in unmap_single_vma in the madvise
+ * syscall) mentioned above, cond_resched() is called once per
+ * PMD to avoid occupying the CPU for a long time (in case a
+ * huge page range is being zapped). When this happens on a
+ * non-preemptible kernel, the retry loop in
+ * kvm_mips_map_page runs endlessly, as there is no
+ * chance to reschedule back to the madvise syscall to run
+ * __mmu_notifier_invalidate_range_end and decrease
+ * mmu_notifier_count, so the value of mmu_notifier_count
+ * stays at 1.
+ * Adding a scheduling point before every retry in
+ * kvm_mips_map_page gives the madvise syscall (invoked by
+ * QEMU) a chance to be rescheduled, zap the pages in the
+ * given range, and clear mmu_notifier_count, letting the
+ * kvm_mips_map_page task jump out of the loop.
+ */
+ cond_resched();
goto retry;
}
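
For context, the retry loop this hunk touches has roughly the following shape (a simplified sketch, not the exact upstream code: error handling, the write-fault path and dirty-page bookkeeping are elided; mmu_notifier_retry() is the common KVM helper that compares the cached mmu_notifier_seq and checks mmu_notifier_count):

retry:
        mmu_seq = kvm->mmu_notifier_seq;
        smp_rmb();                      /* pairs with the notifier-side barrier */

        pfn = gfn_to_pfn(kvm, gfn);     /* may block until memory is freed */

        spin_lock(&kvm->mmu_lock);
        if (mmu_notifier_retry(kvm, mmu_seq)) {
                /* An invalidation is still in flight; drop the pfn and retry. */
                spin_unlock(&kvm->mmu_lock);
                kvm_release_pfn_clean(pfn);
                cond_resched();         /* the scheduling point this patch adds */
                goto retry;
        }
        /* ... otherwise install the PTE while still holding mmu_lock ... */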
Add a cond_resched() to give the scheduler a chance to run the madvise
task and avoid an endless loop here on a non-preemptible kernel.
Otherwise, mmu_notifier_count would have no chance to be decreased to 0
by the madvise task -> syscall -> zap_page_range ->
mmu_notifier_invalidate_range_end -> __mmu_notifier_invalidate_range_end
-> invalidate_range_end -> kvm_mmu_notifier_invalidate_range_end, as the
madvise task may be scheduled out while running unmap_single_vma ->
unmap_page_range -> zap_p4d_range -> zap_pud_range -> zap_pmd_range ->
cond_resched, which is called before mmu_notifier_invalidate_range_end
in zap_page_range.

When handling a GPA fault by creating a new GPA mapping in
kvm_mips_map_page, it retries until a page becomes available. In the
low-memory case, it is waiting for memory to be freed by the madvise
syscall with MADV_DONTNEED (QEMU application -> madvise with
MADV_DONTNEED -> syscall -> madvise_vma -> madvise_dontneed_free ->
madvise_dontneed_single_vma -> zap_page_range). In zap_page_range, after
the TLB for the given address range is flushed by unmap_single_vma, it
calls __mmu_notifier_invalidate_range_end, which finally calls
kvm_mmu_notifier_invalidate_range_end to decrease mmu_notifier_count to
0. The retry loop in kvm_mips_map_page checks mmu_notifier_count; if the
value is 0, which indicates that a new page is available for mapping, it
jumps out of the retry loop and sets up the PTE for the new GPA mapping.

During the TLB flush (in unmap_single_vma in the madvise syscall)
mentioned above, cond_resched() is called once per PMD to avoid
occupying the CPU for a long time (in case a huge page range is being
zapped). When this happens on a non-preemptible kernel, the retry loop
in kvm_mips_map_page runs endlessly, as there is no chance to reschedule
back to the madvise syscall to run __mmu_notifier_invalidate_range_end
and decrease mmu_notifier_count, so the value of mmu_notifier_count
stays at 1.

Adding a scheduling point before every retry in kvm_mips_map_page gives
the madvise syscall (invoked by QEMU) a chance to be rescheduled, zap
the pages in the given range, and clear mmu_notifier_count, letting the
kvm_mips_map_page task jump out of the loop.

Signed-off-by: Gary Fu <qfu@wavecomp.com>
---
 arch/mips/kvm/mmu.c | 48 +++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 48 insertions(+)
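
For reference, the userspace side of the chain the changelog describes boils down to an madvise(MADV_DONTNEED) call on the guest RAM mapping. Below is a minimal, self-contained sketch of that syscall pattern; it is not QEMU's actual code, and the region size and access pattern are made up for illustration:

#define _DEFAULT_SOURCE
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

int main(void)
{
        size_t len = 64UL << 20;        /* 64 MiB stand-in for a guest RAM region */
        char *ram = mmap(NULL, len, PROT_READ | PROT_WRITE,
                         MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (ram == MAP_FAILED) {
                perror("mmap");
                return 1;
        }

        memset(ram, 0xaa, len);         /* fault the pages in */

        /*
         * Release the backing pages. In the kernel this drives
         * madvise_dontneed_single_vma -> zap_page_range and, on a KVM
         * host, kvm_mmu_notifier_invalidate_range_start/end.
         */
        if (madvise(ram, len, MADV_DONTNEED)) {
                perror("madvise");
                return 1;
        }

        /* A later access refaults fresh zero pages. */
        printf("first byte after MADV_DONTNEED: 0x%02x\n",
               (unsigned char)ram[0]);
        munmap(ram, len);
        return 0;
}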