mm, page_alloc: use check_pages_enabled static key to check tail pages

Message ID: 20230405142840.11068-1-vbabka@suse.cz
State: Mainlined
Commit: 8666925c498674426de44ecba79fd8bf42d3cda3
Series: mm, page_alloc: use check_pages_enabled static key to check tail pages

Commit Message

Vlastimil Babka April 5, 2023, 2:28 p.m. UTC
Commit 700d2e9a36b9 ("mm, page_alloc: reduce page alloc/free sanity
checks") introduced a new static key, check_pages_enabled, to control
when struct pages are sanity checked during allocation and freeing. Mel
Gorman suggested that free_tail_pages_check() could use this static key
as well, instead of relying on CONFIG_DEBUG_VM. That makes sense, so do
that. Also rename the function to free_tail_page_prepare(), because it
works on a single tail page and has a struct page preparation component
as well as the optional checking component.
Finally, remove some unnecessary unlikely() hints within
static_branch_unlikely() blocks, as Mel pointed out for commit
700d2e9a36b9.

Suggested-by: Mel Gorman <mgorman@techsingularity.net>
Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
---
 mm/hugetlb_vmemmap.c |  2 +-
 mm/page_alloc.c      | 10 +++++-----
 2 files changed, 6 insertions(+), 6 deletions(-)
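
For context, the mechanism the patch adopts: a static key is a
runtime-patchable branch, so the sanity checks can stay compiled in yet
cost almost nothing while disabled, whereas the old
IS_ENABLED(CONFIG_DEBUG_VM) test fixes the decision at build time. Below
is a minimal kernel-style sketch of the pattern; the key name matches the
patch, but the definition, helper names, and enable site are illustrative
assumptions, not the exact code from commit 700d2e9a36b9:

#include <linux/init.h>
#include <linux/jump_label.h>

/* Illustrative definition; the real key lives in mm/page_alloc.c and
 * may be defined differently (e.g. defaulting on under CONFIG_DEBUG_VM). */
DEFINE_STATIC_KEY_FALSE(check_pages_enabled);

/* Hot path: while the key is off, static_branch_unlikely() compiles to
 * a no-op jump over the out-of-line check block, so the disabled case
 * is nearly free. */
static bool want_page_checks(void)
{
	if (!static_branch_unlikely(&check_pages_enabled))
		return false;	/* checking disabled at runtime */
	return true;
}

/* Hypothetical init-time call site: flipping the key live-patches the
 * jump instructions, turning the checks on without a rebuild. */
static void __init enable_page_checks_sketch(void)
{
	static_branch_enable(&check_pages_enabled);
}

The practical upside, following the earlier commit, is that a kernel can
ship with the checks compiled in but disabled, and enable them on demand
at boot rather than requiring a CONFIG_DEBUG_VM rebuild.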

Comments

Mel Gorman April 5, 2023, 2:50 p.m. UTC | #1
On Wed, Apr 05, 2023 at 04:28:40PM +0200, Vlastimil Babka wrote:
> Commit 700d2e9a36b9 ("mm, page_alloc: reduce page alloc/free sanity
> checks") introduced a new static key, check_pages_enabled, to control
> when struct pages are sanity checked during allocation and freeing. Mel
> Gorman suggested that free_tail_pages_check() could use this static key
> as well, instead of relying on CONFIG_DEBUG_VM. That makes sense, so do
> that. Also rename the function to free_tail_page_prepare(), because it
> works on a single tail page and has a struct page preparation component
> as well as the optional checking component.
> Finally, remove some unnecessary unlikely() hints within
> static_branch_unlikely() blocks, as Mel pointed out for commit
> 700d2e9a36b9.
> 
> Suggested-by: Mel Gorman <mgorman@techsingularity.net>
> Signed-off-by: Vlastimil Babka <vbabka@suse.cz>

Acked-by: Mel Gorman <mgorman@techsingularity.net>

Patch

diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
index a15cc56cf70a..656b00d1a2fb 100644
--- a/mm/hugetlb_vmemmap.c
+++ b/mm/hugetlb_vmemmap.c
@@ -264,7 +264,7 @@ static void vmemmap_remap_pte(pte_t *pte, unsigned long addr,
  * How many struct page structs need to be reset. When we reuse the head
  * struct page, the special metadata (e.g. page->flags or page->mapping)
  * cannot copy to the tail struct page structs. The invalid value will be
- * checked in the free_tail_pages_check(). In order to avoid the message
+ * checked in the free_tail_page_prepare(). In order to avoid the message
  * of "corrupted mapping in tail page". We need to reset at least 3 (one
  * head struct page struct and two tail struct page structs) struct page
  * structs.
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index a109444e9f44..7df5bf07e013 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1308,7 +1308,7 @@ static inline bool free_page_is_bad(struct page *page)
 	return true;
 }
 
-static int free_tail_pages_check(struct page *head_page, struct page *page)
+static int free_tail_page_prepare(struct page *head_page, struct page *page)
 {
 	struct folio *folio = (struct folio *)head_page;
 	int ret = 1;
@@ -1319,7 +1319,7 @@ static int free_tail_pages_check(struct page *head_page, struct page *page)
 	 */
 	BUILD_BUG_ON((unsigned long)LIST_POISON1 & 1);
 
-	if (!IS_ENABLED(CONFIG_DEBUG_VM)) {
+	if (!static_branch_unlikely(&check_pages_enabled)) {
 		ret = 0;
 		goto out;
 	}
@@ -1447,9 +1447,9 @@ static __always_inline bool free_pages_prepare(struct page *page,
 			ClearPageHasHWPoisoned(page);
 		for (i = 1; i < (1 << order); i++) {
 			if (compound)
-				bad += free_tail_pages_check(page, page + i);
+				bad += free_tail_page_prepare(page, page + i);
 			if (static_branch_unlikely(&check_pages_enabled)) {
-				if (unlikely(free_page_is_bad(page + i))) {
+				if (free_page_is_bad(page + i)) {
 					bad++;
 					continue;
 				}
@@ -2375,7 +2375,7 @@ static inline bool check_new_pages(struct page *page, unsigned int order)
 		for (int i = 0; i < (1 << order); i++) {
 			struct page *p = page + i;
 
-			if (unlikely(check_new_page(p)))
+			if (check_new_page(p))
 				return true;
 		}
 	}
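
A note on the unlikely() removals in the page_alloc.c hunks above:
static_branch_unlikely() already emits the guarded block as the
out-of-line, cold path, so an additional unlikely() hint on a condition
inside that block adds little. The shape of the change, mirroring the
free_pages_prepare() hunk (context trimmed):

	/* Before: a second branch hint inside a block that the static
	 * branch has already moved off the fast path */
	if (static_branch_unlikely(&check_pages_enabled)) {
		if (unlikely(free_page_is_bad(page + i)))
			bad++;
	}

	/* After: the plain test; the enabled case is cold either way,
	 * so the inner hint was redundant */
	if (static_branch_unlikely(&check_pages_enabled)) {
		if (free_page_is_bad(page + i))
			bad++;
	}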