[2/6] mm: Convert __do_fault() to use a folio

Message ID 20231108182809.602073-3-willy@infradead.org
State New
Series Fix fault handler's handling of poisoned tail pages

Commit Message

Matthew Wilcox Nov. 8, 2023, 6:28 p.m. UTC

Convert vmf->page to a folio as soon as we're going to use it.  This fixes
a bug if the fault handler returns a tail page with hardware poison;
tail pages have an invalid page->index, so we would fail to unmap the
page from the page tables.  We actually have to unmap the entire folio (or
mapping_evict_folio() will fail), so use unmap_mapping_folio() instead.
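
For context, the pre-patch logic (the lines removed in the diff below)
looked up the offset through the page itself.  A rough sketch of why
that breaks for a tail page of a large folio:

	/* Pre-patch path (sketch); 'page' may be a tail page here. */
	if (page_mapped(page))
		unmap_mapping_pages(page_mapping(page),
				    page->index, 1, false);

Only a head page has a meaningful ->index; on a tail page that field
overlaps other compound-page metadata, so the call above names an
essentially arbitrary file offset rather than the poisoned page.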

This also saves various calls to compound_head() hidden in lock_page(),
put_page(), etc.
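
Those hidden compound_head() calls come from the page-based wrappers,
which resolve the folio on every invocation; roughly (a simplified
sketch of the include/linux/pagemap.h wrapper of this era, not a
verbatim copy):

	static inline void lock_page(struct page *page)
	{
		/* page_folio() performs the compound_head() lookup */
		folio_lock(page_folio(page));
	}

Since __do_fault() now holds the folio pointer, calling folio_lock(),
folio_put(), etc. directly skips that repeated lookup.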

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Fixes: 793917d997df ("mm/readahead: Add large folio readahead")
Cc: stable@vger.kernel.org
---
 mm/memory.c | 20 ++++++++++----------
 1 file changed, 10 insertions(+), 10 deletions(-)

Comments

Andrew Morton Nov. 8, 2023, 9:07 p.m. UTC | #1
On Wed,  8 Nov 2023 18:28:05 +0000 "Matthew Wilcox (Oracle)" <willy@infradead.org> wrote:

> Convert vmf->page to a folio as soon as we're going to use it.  This fixes
> a bug if the fault handler returns a tail page with hardware poison;
> tail pages have an invalid page->index, so we would fail to unmap the
> page from the page tables.  We actually have to unmap the entire folio (or
> mapping_evict_folio() will

Would we merely fail to unmap or is there a possibility of unmapping
some random innocent other page?

How might this bug manifest in userspace, worst case?

> fail), so use unmap_mapping_folio() instead.
> 
> This also saves various calls to compound_head() hidden in lock_page(),
> put_page(), etc.
> 
> Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
> Fixes: 793917d997df ("mm/readahead: Add large folio readahead")
> Cc: stable@vger.kernel.org

As it's cc:stable I'll pluck this patch out of the rest of the series
and shall stage it for 6.7-rcX, via mm-hotfixes-unstable ->
mm-hotfixes-stable -> Linus.  Unless this bug is a very minor thing?

Matthew Wilcox Nov. 8, 2023, 9:48 p.m. UTC | #2
On Wed, Nov 08, 2023 at 01:07:51PM -0800, Andrew Morton wrote:
> On Wed,  8 Nov 2023 18:28:05 +0000 "Matthew Wilcox (Oracle)" <willy@infradead.org> wrote:
> 
> > Convert vmf->page to a folio as soon as we're going to use it.  This fixes
> > a bug if the fault handler returns a tail page with hardware poison;
> > tail pages have an invalid page->index, so we would fail to unmap the
> > page from the page tables.  We actually have to unmap the entire folio (or
> > mapping_evict_folio() will
> 
> Would we merely fail to unmap or is there a possibility of unmapping
> some random innocent other page?
> 
> How might this bug manifest in userspace, worst case?

I think we might unmap a random other page in this file.  But then the
next fault on that page will bring it back in, so it's only going to be
a tiny performance blip.  And we've just found a hwpoisoned page which
is going to cause all kinds of other excitement, so I doubt it'll be
noticed in the grand scheme of things.

> As it's cc:stable I'll pluck this patch out of the rest of the series
> and shall stage it for 6.7-rcX, via mm-hotfixes-unstable ->
> mm-hotfixes-stable -> Linus.  Unless this bug is a very minor thing?

I think it's minor enough that it can wait for 6.8.  Unless anyone
disagrees?

Patch

diff --git a/mm/memory.c b/mm/memory.c
index 1f18ed4a5497..c2ee303ba6b3 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -4239,6 +4239,7 @@ static vm_fault_t do_anonymous_page(struct vm_fault *vmf)
 static vm_fault_t __do_fault(struct vm_fault *vmf)
 {
 	struct vm_area_struct *vma = vmf->vma;
+	struct folio *folio;
 	vm_fault_t ret;
 
 	/*
@@ -4267,27 +4268,26 @@ static vm_fault_t __do_fault(struct vm_fault *vmf)
 			    VM_FAULT_DONE_COW)))
 		return ret;
 
+	folio = page_folio(vmf->page);
 	if (unlikely(PageHWPoison(vmf->page))) {
-		struct page *page = vmf->page;
 		vm_fault_t poisonret = VM_FAULT_HWPOISON;
 		if (ret & VM_FAULT_LOCKED) {
-			if (page_mapped(page))
-				unmap_mapping_pages(page_mapping(page),
-						    page->index, 1, false);
-			/* Retry if a clean page was removed from the cache. */
-			if (invalidate_inode_page(page))
+			if (page_mapped(vmf->page))
+				unmap_mapping_folio(folio);
+			/* Retry if a clean folio was removed from the cache. */
+			if (mapping_evict_folio(folio->mapping, folio))
 				poisonret = VM_FAULT_NOPAGE;
-			unlock_page(page);
+			folio_unlock(folio);
 		}
-		put_page(page);
+		folio_put(folio);
 		vmf->page = NULL;
 		return poisonret;
 	}
 
 	if (unlikely(!(ret & VM_FAULT_LOCKED)))
-		lock_page(vmf->page);
+		folio_lock(folio);
 	else
-		VM_BUG_ON_PAGE(!PageLocked(vmf->page), vmf->page);
+		VM_BUG_ON_PAGE(!folio_test_locked(folio), vmf->page);
 
 	return ret;
 }