mm: Allow fault_dirty_shared_page() to be called under the VMA lock

Message ID 20230812002033.1002367-1-willy@infradead.org (mailing list archive)
State New
Series mm: Allow fault_dirty_shared_page() to be called under the VMA lock

Commit Message

Matthew Wilcox Aug. 12, 2023, 12:20 a.m. UTC
By making maybe_unlock_mmap_for_io() handle the VMA lock correctly,
we make fault_dirty_shared_page() safe to call without the mmap lock
held.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Reported-by: David Hildenbrand <david@redhat.com>
Tested-by: Suren Baghdasaryan <surenb@google.com>
---
Andrew, can you insert this before "mm: handle faults that merely update
the accessed bit under the VMA lock" please?  It could be handled as a fix
patch, but it actually stands on its own as a separate patch.  No big deal
if it has to go in after that patch.

 mm/internal.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

Patch

diff --git a/mm/internal.h b/mm/internal.h
index 8611f7c5bd16..c7720e83cb3c 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -706,7 +706,7 @@  static inline struct file *maybe_unlock_mmap_for_io(struct vm_fault *vmf,
 	if (fault_flag_allow_retry_first(flags) &&
 	    !(flags & FAULT_FLAG_RETRY_NOWAIT)) {
 		fpin = get_file(vmf->vma->vm_file);
-		mmap_read_unlock(vmf->vma->vm_mm);
+		release_fault_lock(vmf);
 	}
 	return fpin;
 }
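
For context on why this one-line change is enough: release_fault_lock()
drops whichever lock the fault path actually holds, the per-VMA read lock
when the fault is handled under FAULT_FLAG_VMA_LOCK, and the mmap read
lock otherwise. The sketch below is a rough approximation of that helper
(the authoritative definition lives in include/linux/mm.h and is gated on
CONFIG_PER_VMA_LOCK); it is shown only to illustrate why the caller no
longer needs to assume the mmap lock is what it holds.

#ifdef CONFIG_PER_VMA_LOCK
/* Drop whichever lock this fault is being handled under. */
static inline void release_fault_lock(struct vm_fault *vmf)
{
	if (vmf->flags & FAULT_FLAG_VMA_LOCK)
		vma_end_read(vmf->vma);			/* per-VMA read lock */
	else
		mmap_read_unlock(vmf->vma->vm_mm);	/* mmap read lock */
}
#else
/* Without per-VMA locks, faults always run under the mmap read lock. */
static inline void release_fault_lock(struct vm_fault *vmf)
{
	mmap_read_unlock(vmf->vma->vm_mm);
}
#endif

With that behaviour, maybe_unlock_mmap_for_io() releases the correct lock
whether the fault came in under the mmap lock or the VMA lock, which is
what makes fault_dirty_shared_page(), which uses this helper to pin the
file and drop the lock before dirty-page throttling, safe to call under
the VMA lock.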