
hugetlb: set hugetlb page flag before optimizing vmemmap

Message ID: 20230829213734.69673-1-mike.kravetz@oracle.com

Commit Message

Mike Kravetz Aug. 29, 2023, 9:37 p.m. UTC
Currently, vmemmap optimization of hugetlb pages is performed before the
hugetlb flag (previously the hugetlb destructor) is set, identifying the
folio as hugetlb.  This means there is a window of time during which an
ordinary folio does not have all of its associated vmemmap present.  The
core mm only expects vmemmap to be potentially optimized for hugetlb and
device dax.  This can cause problems in code such as memory error
handling that may want to write to tail struct pages.

There is only one call to perform hugetlb vmemmap optimization today.
To fix this issue, simply set the hugetlb flag before that call.
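
With that change, the ordering in __prep_new_hugetlb_folio() becomes
(abridged from the mm/hugetlb.c hunk below; the rest of the
initialization is unchanged):

	static void __prep_new_hugetlb_folio(struct hstate *h, struct folio *folio)
	{
		/* Mark the folio as hugetlb before any vmemmap pages are freed. */
		folio_set_hugetlb(folio);
		hugetlb_vmemmap_optimize(h, &folio->page);
		INIT_LIST_HEAD(&folio->lru);
		...
	}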

There was a similar issue in the hugetlb free path that was previously
addressed.  The two routines that optimize or restore hugetlb vmemmap
should only be passed hugetlb folios/pages.  To catch any callers not
following this rule, add VM_WARN_ON calls to the routines.  In the
hugetlb free code paths, some calls could be made to restore vmemmap
after clearing the hugetlb flag.  This was 'safe' because in these cases
the vmemmap was already present and the call was a no-op.  However, for
consistency these calls were eliminated so that the VM_WARN_ON checks
can be added.
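
For illustration, a mis-ordered caller (hypothetical; none remain after
this patch) would now trip the new check on kernels built with
CONFIG_DEBUG_VM:

	/*
	 * WRONG: the folio is not yet marked hugetlb, so the
	 * VM_WARN_ON_ONCE(!PageHuge(head)) in hugetlb_vmemmap_optimize
	 * fires here.
	 */
	hugetlb_vmemmap_optimize(h, &folio->page);
	folio_set_hugetlb(folio);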

Fixes: f41f2ed43ca5 ("mm: hugetlb: free the vmemmap pages associated with each HugeTLB page")
Signed-off-by: Mike Kravetz <mike.kravetz@oracle.com>
---
 mm/hugetlb.c         | 31 ++++++++++++++++++++++---------
 mm/hugetlb_vmemmap.c |  3 +++
 2 files changed, 25 insertions(+), 9 deletions(-)

Comments

Muchun Song Aug. 30, 2023, 3:32 a.m. UTC | #1
> On Aug 30, 2023, at 05:37, Mike Kravetz <mike.kravetz@oracle.com> wrote:
> [...]
> 
> Fixes: f41f2ed43ca5 ("mm: hugetlb: free the vmemmap pages associated with each HugeTLB page")
> Signed-off-by: Mike Kravetz <mike.kravetz@oracle.com>

Thanks for your fix.

Reviewed-by: Muchun Song <songmuchun@bytedance.com>

Patch

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index ba6d39b71cb1..c32ca241df4b 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1720,7 +1720,12 @@ static void __update_and_free_hugetlb_folio(struct hstate *h,
 	if (folio_test_hugetlb_raw_hwp_unreliable(folio))
 		return;
 
-	if (hugetlb_vmemmap_restore(h, &folio->page)) {
+	/*
+	 * If folio is not vmemmap optimized (!clear_dtor), then the folio
+	 * is no longer identified as a hugetlb page.  hugetlb_vmemmap_restore
+	 * can only be passed hugetlb pages and will BUG otherwise.
+	 */
+	if (clear_dtor && hugetlb_vmemmap_restore(h, &folio->page)) {
 		spin_lock_irq(&hugetlb_lock);
 		/*
 		 * If we cannot allocate vmemmap pages, just refuse to free the
@@ -1930,9 +1935,9 @@ static void __prep_account_new_huge_page(struct hstate *h, int nid)
 
 static void __prep_new_hugetlb_folio(struct hstate *h, struct folio *folio)
 {
+	folio_set_hugetlb(folio);
 	hugetlb_vmemmap_optimize(h, &folio->page);
 	INIT_LIST_HEAD(&folio->lru);
-	folio_set_hugetlb(folio);
 	hugetlb_set_folio_subpool(folio, NULL);
 	set_hugetlb_cgroup(folio, NULL);
 	set_hugetlb_cgroup_rsvd(folio, NULL);
@@ -3580,13 +3585,21 @@ static int demote_free_hugetlb_folio(struct hstate *h, struct folio *folio)
 	remove_hugetlb_folio_for_demote(h, folio, false);
 	spin_unlock_irq(&hugetlb_lock);
 
-	rc = hugetlb_vmemmap_restore(h, &folio->page);
-	if (rc) {
-		/* Allocation of vmemmmap failed, we can not demote folio */
-		spin_lock_irq(&hugetlb_lock);
-		folio_ref_unfreeze(folio, 1);
-		add_hugetlb_folio(h, folio, false);
-		return rc;
+	/*
+	 * If vmemmap already existed for folio, the remove routine above would
+	 * have cleared the hugetlb folio flag.  Hence the folio is technically
+	 * no longer a hugetlb folio.  hugetlb_vmemmap_restore can only be
+	 * passed hugetlb folios and will BUG otherwise.
+	 */
+	if (folio_test_hugetlb(folio)) {
+		rc = hugetlb_vmemmap_restore(h, &folio->page);
+		if (rc) {
+			/* Allocation of vmemmap failed, we cannot demote the folio */
+			spin_lock_irq(&hugetlb_lock);
+			folio_ref_unfreeze(folio, 1);
+			add_hugetlb_folio(h, folio, false);
+			return rc;
+		}
 	}
 
 	/*
diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
index 4b9734777f69..aeb7dd889eee 100644
--- a/mm/hugetlb_vmemmap.c
+++ b/mm/hugetlb_vmemmap.c
@@ -13,6 +13,7 @@ 
 #include <linux/pgtable.h>
 #include <linux/moduleparam.h>
 #include <linux/bootmem_info.h>
+#include <linux/mmdebug.h>
 #include <asm/pgalloc.h>
 #include <asm/tlbflush.h>
 #include "hugetlb_vmemmap.h"
@@ -456,6 +457,7 @@ int hugetlb_vmemmap_restore(const struct hstate *h, struct page *head)
 	unsigned long vmemmap_start = (unsigned long)head, vmemmap_end;
 	unsigned long vmemmap_reuse;
 
+	VM_WARN_ON_ONCE(!PageHuge(head));
 	if (!HPageVmemmapOptimized(head))
 		return 0;
 
@@ -550,6 +552,7 @@ void hugetlb_vmemmap_optimize(const struct hstate *h, struct page *head)
 	unsigned long vmemmap_start = (unsigned long)head, vmemmap_end;
 	unsigned long vmemmap_reuse;
 
+	VM_WARN_ON_ONCE(!PageHuge(head));
 	if (!vmemmap_should_optimize(h, head))
 		return;
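
Note that VM_WARN_ON_ONCE() only fires on kernels built with
CONFIG_DEBUG_VM; otherwise the check compiles away, so the new
assertions cost nothing in production builds.  Roughly, from
include/linux/mmdebug.h:

	#ifdef CONFIG_DEBUG_VM
	#define VM_WARN_ON_ONCE(cond)	(void)WARN_ON_ONCE(cond)
	#else
	#define VM_WARN_ON_ONCE(cond)	BUILD_BUG_ON_INVALID(cond)
	#endif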