
[05/16] mm/shmem.c: fix judgment error in shmem_is_huge()

Message ID 20210924224332.489iilWzd%akpm@linux-foundation.org (mailing list archive)
State New
Series [01/16] mm, hwpoison: add is_free_buddy_page() in HWPoisonHandlable()

Commit Message

Andrew Morton Sept. 24, 2021, 10:43 p.m. UTC
From: Liu Yuntao <liuyuntao10@huawei.com>
Subject: mm/shmem.c: fix judgment error in shmem_is_huge()

In the case of SHMEM_HUGE_WITHIN_SIZE, the page index is not rounded up
correctly.  When the page index points to the first page of a huge page,
round_up() does not advance it to the end of that huge page; it leaves it
at the end of the previous one.

An example:

HPAGE_PMD_NR on my machine is 512 (2 MB huge page size).  After allocating
a 3000 KB buffer, I access it at offset 2050 KB.  In shmem_is_huge(),
the corresponding page index happens to be 512.  After rounding it up by
HPAGE_PMD_NR, it is still 512, which is smaller than i_size in pages, so
shmem_is_huge() returns true.  As a result, my buffer takes an
additional huge page, and that shouldn't happen when shmem_enabled is set
to within_size.
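
To make the arithmetic concrete, here is a minimal userspace sketch (not
kernel code) of the check before and after the patch, using the numbers
from the example above; the PAGE_SHIFT and HPAGE_PMD_NR values and the
round_up() definition are stand-ins for the kernel's, assumed here for
illustration only:

#include <stdio.h>

#define PAGE_SHIFT	12
#define PAGE_SIZE	(1UL << PAGE_SHIFT)		/* 4 KB pages */
#define HPAGE_PMD_NR	512				/* 2 MB huge page / 4 KB */

/* power-of-two round_up(), mirroring the kernel macro's behaviour */
#define round_up(x, y)	((((x) - 1) | ((y) - 1)) + 1)

int main(void)
{
	unsigned long i_size = 3000UL * 1024;			/* 3000 KB file */
	unsigned long index  = (2050UL * 1024) >> PAGE_SHIFT;	/* access at 2050 KB -> index 512 */
	unsigned long size_pages = round_up(i_size, PAGE_SIZE) >> PAGE_SHIFT;	/* 750 */

	/* before: round_up(512, 512) == 512, 750 >= 512 -> huge page allocated */
	printf("old check: %d\n", size_pages >= round_up(index, HPAGE_PMD_NR));
	/* after:  round_up(513, 512) == 1024, 750 < 1024 -> no huge page */
	printf("new check: %d\n", size_pages >= round_up(index + 1, HPAGE_PMD_NR));
	return 0;
}

With the rounding fixed, an access that merely touches the first page of a
huge-page-sized region no longer passes the within_size test unless i_size
actually extends to the end of that huge page.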

Link: https://lkml.kernel.org/r/20210909032007.18353-1-liuyuntao10@huawei.com
Fixes: f3f0e1d2150b2b ("khugepaged: add support of collapse for tmpfs/shmem pages")
Signed-off-by: Liu Yuntao <liuyuntao10@huawei.com>
Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Acked-by: Hugh Dickins <hughd@google.com>
Cc: wuxu.wu <wuxu.wu@huawei.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---

 mm/shmem.c |    4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

Patch

--- a/mm/shmem.c~fix-judgment-error-in-shmem_is_huge
+++ a/mm/shmem.c
@@ -490,9 +490,9 @@  bool shmem_is_huge(struct vm_area_struct
 	case SHMEM_HUGE_ALWAYS:
 		return true;
 	case SHMEM_HUGE_WITHIN_SIZE:
-		index = round_up(index, HPAGE_PMD_NR);
+		index = round_up(index + 1, HPAGE_PMD_NR);
 		i_size = round_up(i_size_read(inode), PAGE_SIZE);
-		if (i_size >= HPAGE_PMD_SIZE && (i_size >> PAGE_SHIFT) >= index)
+		if (i_size >> PAGE_SHIFT >= index)
 			return true;
 		fallthrough;
 	case SHMEM_HUGE_ADVISE: