
mm, dump_page: rename head_mapcount() --> head_compound_mapcount()

Message ID 20200807183358.105097-1-jhubbard@nvidia.com (mailing list archive)
State New, archived
Series mm, dump_page: rename head_mapcount() --> head_compound_mapcount()

Commit Message

John Hubbard Aug. 7, 2020, 6:33 p.m. UTC
And similarly, rename head_pincount() --> head_compound_pincount().
These names are more accurate (or less misleading) than the original
ones.

Cc: Qian Cai <cai@lca.pw>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Signed-off-by: John Hubbard <jhubbard@nvidia.com>
---

Hi,

This is a follow-up patch to v2 of "mm, dump_page: do not crash with bad 
compound_mapcount()", which I see has already been sent as part of 
Andrew's pull request. Otherwise I would have sent a v3.

Of course, if it's somehow not too late, then squashing this patch into 
that one, effectively creating a v3 with the preferred names, would be a 
nice touch.

thanks,
John Hubbard

 include/linux/mm.h | 8 ++++----
 mm/debug.c         | 6 +++---
 2 files changed, 7 insertions(+), 7 deletions(-)

Patch

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 8ab941cf73f4..98d379d9806f 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -776,7 +776,7 @@  static inline void *kvcalloc(size_t n, size_t size, gfp_t flags)
 extern void kvfree(const void *addr);
 extern void kvfree_sensitive(const void *addr, size_t len);
 
-static inline int head_mapcount(struct page *head)
+static inline int head_compound_mapcount(struct page *head)
 {
 	return atomic_read(compound_mapcount_ptr(head)) + 1;
 }
@@ -790,7 +790,7 @@  static inline int compound_mapcount(struct page *page)
 {
 	VM_BUG_ON_PAGE(!PageCompound(page), page);
 	page = compound_head(page);
-	return head_mapcount(page);
+	return head_compound_mapcount(page);
 }
 
 /*
@@ -903,7 +903,7 @@  static inline bool hpage_pincount_available(struct page *page)
 	return PageCompound(page) && compound_order(page) > 1;
 }
 
-static inline int head_pincount(struct page *head)
+static inline int head_compound_pincount(struct page *head)
 {
 	return atomic_read(compound_pincount_ptr(head));
 }
@@ -912,7 +912,7 @@  static inline int compound_pincount(struct page *page)
 {
 	VM_BUG_ON_PAGE(!hpage_pincount_available(page), page);
 	page = compound_head(page);
-	return head_pincount(page);
+	return head_compound_pincount(page);
 }
 
 static inline void set_compound_order(struct page *page, unsigned int order)
diff --git a/mm/debug.c b/mm/debug.c
index 69b60637112b..a0c060abf1e7 100644
--- a/mm/debug.c
+++ b/mm/debug.c
@@ -102,12 +102,12 @@  void __dump_page(struct page *page, const char *reason)
 		if (hpage_pincount_available(page)) {
 			pr_warn("head:%p order:%u compound_mapcount:%d compound_pincount:%d\n",
 					head, compound_order(head),
-					head_mapcount(head),
-					head_pincount(head));
+					head_compound_mapcount(head),
+					head_compound_pincount(head));
 		} else {
 			pr_warn("head:%p order:%u compound_mapcount:%d\n",
 					head, compound_order(head),
-					head_mapcount(head));
+					head_compound_mapcount(head));
 		}
 	}
 	if (PageKsm(page))
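
As context for the "+ 1" in head_compound_mapcount(): the kernel stores a
compound page's compound_mapcount biased by -1, so a stored value of -1
means "no mappings" and the real count is the stored value plus one. The
following is a minimal user-space sketch of that convention only (not
kernel code; fake_head_page and sketch_head_compound_mapcount() are made-up
names for illustration, not part of this patch):

#include <stdatomic.h>
#include <stdio.h>

struct fake_head_page {
	/* Stored biased by -1, mirroring the kernel's mapcount convention. */
	atomic_int compound_mapcount;
};

static int sketch_head_compound_mapcount(struct fake_head_page *head)
{
	/* Undo the -1 bias on read, as head_compound_mapcount() does. */
	return atomic_load(&head->compound_mapcount) + 1;
}

int main(void)
{
	struct fake_head_page head = { .compound_mapcount = -1 };

	printf("unmapped:    %d\n", sketch_head_compound_mapcount(&head)); /* 0 */

	atomic_fetch_add(&head.compound_mapcount, 1);	/* record one mapping */
	printf("mapped once: %d\n", sketch_head_compound_mapcount(&head)); /* 1 */

	return 0;
}

Built with any C11 compiler, this prints 0 for the unmapped case and 1 after
one mapping has been recorded.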