
[10/10] mm/khugepaged: fix the xas_create_range() error path

Message ID alpine.LSU.2.11.1811261531200.2275@eggly.anvils (mailing list archive)
State New, archived
Series huge_memory,khugepaged tmpfs split/collapse fixes

Commit Message

Hugh Dickins Nov. 26, 2018, 11:33 p.m. UTC
collapse_shmem()'s xas_nomem() is very unlikely to fail, but it is
rightly given a failure path, so move the whole xas_create_range() block
up before __SetPageLocked(new_page): so that it does not need to remember
to unlock_page(new_page).  Add the missing mem_cgroup_cancel_charge(),
and set (currently unused) result to SCAN_FAIL rather than SCAN_SUCCEED.

Fixes: 77da9389b9d5 ("mm: Convert collapse_shmem to XArray")
Signed-off-by: Hugh Dickins <hughd@kernel.org>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
---
 mm/khugepaged.c | 25 ++++++++++++++-----------
 1 file changed, 14 insertions(+), 11 deletions(-)
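
For readers skimming the archive: the reordered error path is essentially the '+' side of the first hunk in the patch below, shown here as a lightly commented sketch (identifiers such as xas, new_page, memcg, result and the out label are those of collapse_shmem()):

	/* Reserve the XArray range before __SetPageLocked(new_page), so
	 * the failure path never has to remember to unlock the new page.
	 */
	do {
		xas_lock_irq(&xas);
		xas_create_range(&xas);
		if (!xas_error(&xas))
			break;		/* range created; xa_lock still held, irqs off */
		xas_unlock_irq(&xas);
		if (!xas_nomem(&xas, GFP_KERNEL)) {
			/* allocation failed: undo the memcg charge, report failure */
			mem_cgroup_cancel_charge(new_page, memcg, true);
			result = SCAN_FAIL;
			goto out;
		}
	} while (1);

Keeping the charge cancellation and the SCAN_FAIL assignment in the same branch puts all of the xas_nomem() failure handling in one place, before any page state has been set up.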

Comments

Kirill A. Shutemov Nov. 27, 2018, 8:02 a.m. UTC | #1
On Mon, Nov 26, 2018 at 03:33:13PM -0800, Hugh Dickins wrote:
> collapse_shmem()'s xas_nomem() is very unlikely to fail, but it is
> rightly given a failure path, so move the whole xas_create_range() block
> up before __SetPageLocked(new_page): so that it does not need to remember
> to unlock_page(new_page).  Add the missing mem_cgroup_cancel_charge(),
> and set (currently unused) result to SCAN_FAIL rather than SCAN_SUCCEED.
> 
> Fixes: 77da9389b9d5 ("mm: Convert collapse_shmem to XArray")
> Signed-off-by: Hugh Dickins <hughd@kernel.org>
> Cc: Matthew Wilcox <willy@infradead.org>
> Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>

Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>

Patch

diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 2c5fe4f7a0c6..8e2ff195ecb3 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -1329,6 +1329,20 @@  static void collapse_shmem(struct mm_struct *mm,
 		goto out;
 	}
 
+	/* This will be less messy when we use multi-index entries */
+	do {
+		xas_lock_irq(&xas);
+		xas_create_range(&xas);
+		if (!xas_error(&xas))
+			break;
+		xas_unlock_irq(&xas);
+		if (!xas_nomem(&xas, GFP_KERNEL)) {
+			mem_cgroup_cancel_charge(new_page, memcg, true);
+			result = SCAN_FAIL;
+			goto out;
+		}
+	} while (1);
+
 	__SetPageLocked(new_page);
 	__SetPageSwapBacked(new_page);
 	new_page->index = start;
@@ -1340,17 +1354,6 @@  static void collapse_shmem(struct mm_struct *mm,
 	 * be able to map it or use it in another way until we unlock it.
 	 */
 
-	/* This will be less messy when we use multi-index entries */
-	do {
-		xas_lock_irq(&xas);
-		xas_create_range(&xas);
-		if (!xas_error(&xas))
-			break;
-		xas_unlock_irq(&xas);
-		if (!xas_nomem(&xas, GFP_KERNEL))
-			goto out;
-	} while (1);
-
 	xas_set(&xas, start);
 	for (index = start; index < end; index++) {
 		struct page *page = xas_next(&xas);