
mm/khugepaged: fix filemap page_to_pgoff(page) != offset

Message ID alpine.LSU.2.11.2010091943300.13335@eggly.anvils (mailing list archive)

Commit Message

Hugh Dickins Oct. 10, 2020, 3:07 a.m. UTC
There have been elusive reports of filemap_fault() hitting its
VM_BUG_ON_PAGE(page_to_pgoff(page) != offset, page) on kernels built
with CONFIG_READ_ONLY_THP_FOR_FS=y.

Suren has hit it on a kernel with CONFIG_READ_ONLY_THP_FOR_FS=y and
CONFIG_NUMA is not set: and he has analyzed it down to how khugepaged
without NUMA reuses the same huge page after collapse_file() failed
(whereas NUMA targets its allocation to the respective node each time).
And most of us were usually testing with CONFIG_NUMA=y kernels.

collapse_file(old start)
  new_page = khugepaged_alloc_page(hpage)
  __SetPageLocked(new_page)
  new_page->index = start // hpage->index=old offset
  new_page->mapping = mapping
  xas_store(&xas, new_page)

                          filemap_fault
                            page = find_get_page(mapping, offset)
                            // if offset falls inside hpage then
                            // compound_head(page) == hpage
                            lock_page_maybe_drop_mmap()
                              __lock_page(page)

  // collapse fails
  xas_store(&xas, old page)
  new_page->mapping = NULL
  unlock_page(new_page)

collapse_file(new start)
  new_page = khugepaged_alloc_page(hpage)
  __SetPageLocked(new_page)
  new_page->index = start // hpage->index=new offset
  new_page->mapping = mapping // mapping becomes valid again

                            // since compound_head(page) == hpage
                            // page_to_pgoff(page) got changed
                            VM_BUG_ON_PAGE(page_to_pgoff(page) != offset)
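
The reuse is inherent in the non-NUMA allocation path: khugepaged keeps
one preallocated hpage around, and khugepaged_alloc_page() just hands it
back for the next attempt.  A condensed sketch, following 5.9's
mm/khugepaged.c (abridged, not the full function set):

#ifndef CONFIG_NUMA
/*
 * Non-NUMA sketch: khugepaged_prealloc_page() leaves a previously
 * allocated (and possibly previously exposed) *hpage in place, so
 * the very same page comes back for the next collapse attempt,
 * possibly in a different mapping or at a different offset.
 */
static struct page *khugepaged_alloc_page(struct page **hpage, gfp_t gfp,
					  int node)
{
	VM_BUG_ON(!*hpage);

	return *hpage;
}
#endif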

An initial patch replaced __SetPageLocked() by lock_page(), which did
fix the race that Suren illustrates above.  But testing showed that it's
not good enough: if the racing task's __lock_page() gets delayed long
after its find_get_page(), then it may follow collapse_file(new start)'s
successful final unlock_page(), and crash on the same VM_BUG_ON_PAGE.
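
The gist of that abandoned first attempt, as a sketch against 5.9's
collapse_file() (not the patch actually posted):

-	__SetPageLocked(new_page);
+	lock_page(new_page);	/* block until a racing locker is done with the page */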

It could be fixed by relaxing filemap_fault()'s VM_BUG_ON_PAGE to a
check and retry (as is done for mapping), with similar relaxations in
find_lock_entry() and pagecache_get_page(): but it's not obvious what
else might get caught out; and khugepaged non-NUMA appears to be unique
in exposing a page to page cache, then revoking, without going through
a full cycle of freeing before reuse.
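
Concretely, that rejected relaxation would turn filemap_fault()'s
assertion into a retry, modelled on the truncation check just above it
(a sketch against 5.9's mm/filemap.c):

	/* Did it get truncated? */
	if (unlikely(compound_head(page)->mapping != mapping)) {
		unlock_page(page);
		put_page(page);
		goto retry_find;
	}

	/*
	 * Relaxed: where 5.9 has
	 *	VM_BUG_ON_PAGE(page_to_pgoff(page) != offset, page);
	 * treat a changed offset like truncation and retry instead.
	 */
	if (unlikely(page_to_pgoff(page) != offset)) {
		unlock_page(page);
		put_page(page);
		goto retry_find;
	}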

Instead, non-NUMA khugepaged_prealloc_page() now releases the old page
if anyone else has a reference to it (1% of cases when I tested).

Although never reported on huge tmpfs, I believe its find_lock_entry()
has been at similar risk; but huge tmpfs does not rely on khugepaged
for its normal working nearly so much as READ_ONLY_THP_FOR_FS does.
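
For comparison, find_lock_entry() as in 5.9's mm/filemap.c (lightly
abridged) already retries when the mapping changes, but still asserts
on a changed offset:

struct page *find_lock_entry(struct address_space *mapping, pgoff_t offset)
{
	struct page *page;

repeat:
	page = find_get_entry(mapping, offset);
	if (page && !xa_is_value(page)) {
		lock_page(page);
		/* Has the page been truncated? */
		if (unlikely(page_mapping(page) != mapping)) {
			unlock_page(page);
			put_page(page);
			goto repeat;
		}
		VM_BUG_ON_PAGE(page_to_pgoff(page) != offset, page);
	}
	return page;
}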

Reported-by: Denis Lisov <dennis.lissov@gmail.com>
Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=206569
Link: https://lore.kernel.org/linux-mm/?q=20200219144635.3b7417145de19b65f258c943%40linux-foundation.org
Reported-by: Qian Cai <cai@lca.pw>
Link: https://lore.kernel.org/linux-xfs/?q=20200616013309.GB815%40lca.pw
Reported-and-analyzed-by: Suren Baghdasaryan <surenb@google.com>
Fixes: 87c460a0bded ("mm/khugepaged: collapse_shmem() without freezing new_page")
Signed-off-by: Hugh Dickins <hughd@google.com>
Cc: stable@vger.kernel.org # v4.9+
---

 mm/khugepaged.c |   12 ++++++++++++
 1 file changed, 12 insertions(+)

Comments

Matthew Wilcox Oct. 10, 2020, 3:09 p.m. UTC | #1
On Fri, Oct 09, 2020 at 08:07:59PM -0700, Hugh Dickins wrote:
> There have been elusive reports of filemap_fault() hitting its
> VM_BUG_ON_PAGE(page_to_pgoff(page) != offset, page) on kernels built
> with CONFIG_READ_ONLY_THP_FOR_FS=y.
> 
> Suren has hit it on a kernel with CONFIG_READ_ONLY_THP_FOR_FS=y and
> CONFIG_NUMA is not set: and he has analyzed it down to how khugepaged
> without NUMA reuses the same huge page after collapse_file() failed
> (whereas NUMA targets its allocation to the respective node each time).
> And most of us were usually testing with CONFIG_NUMA=y kernels.

Good catch.  There have been at least three bugs in recent times which
can cause this VM_BUG_ON_PAGE() to trigger: this one; one where swapping
out a THP led to all 512 entries pointing to the same non-huge page on
swapin (fixed in -mm); and one that I introduced for a few weeks in -mm,
where failing to split a THP would lead to random tree corruption due
to a non-zeroed node being freed to the slab cache.

There may yet be a fourth.  I've seen it occasionally in recent testing
so I'll add this patch and see if it disappears.

> Instead, non-NUMA khugepaged_prealloc_page() now releases the old page
> if anyone else has a reference to it (1% of cases when I tested).

I think this is a good way to fix the problem.  We could also change
khugepaged to insert a frozen page, ensuring that find_get_entry()
would spin until the collapse has succeeded or the page was removed
from the cache again.  But I have no problem with this approach.
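
That would work because page-cache lookups take their reference
speculatively and retry on failure; with the new page stored frozen
(refcount zero), 5.9's find_get_entry() would keep looping here until
the collapse resolved (abridged excerpt):

	/*
	 * find_get_entry() (5.9), abridged: a page frozen at refcount
	 * zero fails the speculative get, so the lookup keeps retrying
	 * until the entry is unfrozen or removed from the cache.
	 */
	if (!page_cache_get_speculative(page))
		goto repeat;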

I want to note that this is silent data corruption for reads.
generic_file_buffered_read() holds a reference to the page, so this
patch will fix it; but before it, reads could copy the wrong data
to userspace.

Reviewed-by: Matthew Wilcox (Oracle) <willy@infradead.org>

Patch

--- 5.9-rc8/mm/khugepaged.c	2020-09-06 17:34:46.939306972 -0700
+++ linux/mm/khugepaged.c	2020-10-08 16:19:42.999765534 -0700
@@ -914,6 +914,18 @@  static struct page *khugepaged_alloc_hug
 
 static bool khugepaged_prealloc_page(struct page **hpage, bool *wait)
 {
+	/*
+	 * If the hpage allocated earlier was briefly exposed in page cache
+	 * before collapse_file() failed, it is possible that racing lookups
+	 * have not yet completed, and would then be unpleasantly surprised by
+	 * finding the hpage reused for the same mapping at a different offset.
+	 * Just release the previous allocation if there is any danger of that.
+	 */
+	if (*hpage && page_count(*hpage) > 1) {
+		put_page(*hpage);
+		*hpage = NULL;
+	}
+
 	if (!*hpage)
 		*hpage = khugepaged_alloc_hugepage(wait);
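
For context, the whole non-NUMA helper after this patch would read as
below; the tail here is reconstructed from 5.9's source rather than
quoted from the hunk above:

static bool khugepaged_prealloc_page(struct page **hpage, bool *wait)
{
	/*
	 * See the comment in the patch above: drop a previously exposed
	 * hpage if racing lookups may still hold a reference to it.
	 */
	if (*hpage && page_count(*hpage) > 1) {
		put_page(*hpage);
		*hpage = NULL;
	}

	if (!*hpage)
		*hpage = khugepaged_alloc_hugepage(wait);

	if (unlikely(!*hpage))
		return false;

	return true;
}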