[08/17] gup: Add try_grab_folio()

Message ID: 20220102215729.2943705-9-willy@infradead.org
State: New
Series: Convert GUP to folios

Commit Message

Matthew Wilcox Jan. 2, 2022, 9:57 p.m. UTC
try_grab_compound_head() is turned into a thin wrapper around try_grab_folio().
Convert the two callers that only care about boolean success/failure.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 include/linux/mm.h |  4 +---
 mm/gup.c           | 25 +++++++++++++------------
 mm/hugetlb.c       |  7 +++----
 3 files changed, 17 insertions(+), 19 deletions(-)
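The shape of the conversion can be modeled with a minimal userspace sketch. The types and constants below (`struct page`, `struct folio`, `FOLL_*`, `GUP_PIN_COUNTING_BIAS`) are simplified stand-ins for the kernel's, and the refcount manipulation is plain arithmetic rather than the real atomic helpers; only the wrapper structure mirrors the patch:

```c
#include <stddef.h>

/* Simplified stand-ins for kernel types and flags; not the real definitions. */
#define FOLL_GET 0x1u
#define FOLL_PIN 0x2u
#define GUP_PIN_COUNTING_BIAS 1024

struct page {
	int refcount;
	struct page *head;	/* head page of the compound page / folio */
};

struct folio {
	struct page page;	/* folio embeds its head page as first member */
};

static struct folio *page_folio(struct page *page)
{
	return (struct folio *)page->head;
}

/* Stand-in for the kernel's try_get_folio(): bump the folio refcount. */
static struct folio *try_get_folio(struct page *page, int refs)
{
	struct folio *folio = page_folio(page);

	folio->page.refcount += refs;
	return folio;
}

/* The folio-based core, as in the patch: FOLL_GET takes plain references,
 * FOLL_PIN additionally applies the pin counting bias. */
static struct folio *try_grab_folio(struct page *page, int refs,
				    unsigned int flags)
{
	if (flags & FOLL_GET)
		return try_get_folio(page, refs);
	if (flags & FOLL_PIN) {
		struct folio *folio = try_get_folio(page, refs);

		folio->page.refcount += refs * (GUP_PIN_COUNTING_BIAS - 1);
		return folio;
	}
	return NULL;	/* neither FOLL_GET nor FOLL_PIN: caller bug */
}

/* The thin compatibility wrapper kept for the remaining callers. */
static struct page *try_grab_compound_head(struct page *page, int refs,
					   unsigned int flags)
{
	struct folio *folio = try_grab_folio(page, refs, flags);

	return folio ? &folio->page : NULL;
}
```

Callers that only test for success (as in the follow_hugetlb_page() and try_grab_page() hunks below) can use either function interchangeably, which is what lets the wrapper shrink to one line.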

Comments

Christoph Hellwig Jan. 4, 2022, 8:24 a.m. UTC | #1
On Sun, Jan 02, 2022 at 09:57:20PM +0000, Matthew Wilcox (Oracle) wrote:
>  /**
> - * try_grab_compound_head() - attempt to elevate a page's refcount, by a
> + * try_grab_folio() - attempt to elevate a page's refcount, by a

s/page/folio/ ?

> - *
>   * @page:  pointer to page to be grabbed

and here something about page inside a folio?

Otherwise this looks fine, but I wonder if it would make more sense
to already introduce try_grab_folio earlier when you convert
try_grab_compound_head to use folios internally.
John Hubbard Jan. 5, 2022, 7:06 a.m. UTC | #2
On 1/2/22 13:57, Matthew Wilcox (Oracle) wrote:
> try_grab_compound_head() is turned into a thin wrapper around try_grab_folio().
> Convert the two callers that only care about boolean success/failure.
> 
> Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
> ---
>   include/linux/mm.h |  4 +---
>   mm/gup.c           | 25 +++++++++++++------------
>   mm/hugetlb.c       |  7 +++----
>   3 files changed, 17 insertions(+), 19 deletions(-)
> 

Reviewed-by: John Hubbard <jhubbard@nvidia.com>


thanks,
Patch

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 602de23482ef..4e763a590c9c 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1202,9 +1202,7 @@  static inline void get_page(struct page *page)
 }
 
 bool __must_check try_grab_page(struct page *page, unsigned int flags);
-struct page *try_grab_compound_head(struct page *page, int refs,
-				    unsigned int flags);
-
+struct folio *try_grab_folio(struct page *page, int refs, unsigned int flags);
 
 static inline __must_check bool try_get_page(struct page *page)
 {
diff --git a/mm/gup.c b/mm/gup.c
index 6d827f7d66d8..2307b2917055 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -76,12 +76,8 @@  static inline struct folio *try_get_folio(struct page *page, int refs)
 }
 
 /**
- * try_grab_compound_head() - attempt to elevate a page's refcount, by a
+ * try_grab_folio() - attempt to elevate a page's refcount, by a
  * flags-dependent amount.
- *
- * Even though the name includes "compound_head", this function is still
- * appropriate for callers that have a non-compound @page to get.
- *
  * @page:  pointer to page to be grabbed
  * @refs:  the value to (effectively) add to the page's refcount
  * @flags: gup flags: these are the FOLL_* flag values.
@@ -102,16 +98,15 @@  static inline struct folio *try_get_folio(struct page *page, int refs)
  *    FOLL_PIN on normal pages, or compound pages that are two pages long:
  *    page's refcount will be incremented by @refs * GUP_PIN_COUNTING_BIAS.
  *
- * Return: head page (with refcount appropriately incremented) for success, or
+ * Return: folio (with refcount appropriately incremented) for success, or
  * NULL upon failure. If neither FOLL_GET nor FOLL_PIN was set, that's
  * considered failure, and furthermore, a likely bug in the caller, so a warning
  * is also emitted.
  */
-struct page *try_grab_compound_head(struct page *page,
-				    int refs, unsigned int flags)
+struct folio *try_grab_folio(struct page *page, int refs, unsigned int flags)
 {
 	if (flags & FOLL_GET)
-		return &try_get_folio(page, refs)->page;
+		return try_get_folio(page, refs);
 	else if (flags & FOLL_PIN) {
 		struct folio *folio;
 
@@ -150,13 +145,19 @@  struct page *try_grab_compound_head(struct page *page,
 
 		node_stat_mod_folio(folio, NR_FOLL_PIN_ACQUIRED, refs);
 
-		return &folio->page;
+		return folio;
 	}
 
 	WARN_ON_ONCE(1);
 	return NULL;
 }
 
+static inline struct page *try_grab_compound_head(struct page *page,
+		int refs, unsigned int flags)
+{
+	return &try_grab_folio(page, refs, flags)->page;
+}
+
 static void gup_put_folio(struct folio *folio, int refs, unsigned int flags)
 {
 	if (flags & FOLL_PIN) {
@@ -188,7 +189,7 @@  static void put_compound_head(struct page *page, int refs, unsigned int flags)
  * @flags:   gup flags: these are the FOLL_* flag values.
  *
  * Either FOLL_PIN or FOLL_GET (or neither) may be set, but not both at the same
- * time. Cases: please see the try_grab_compound_head() documentation, with
+ * time. Cases: please see the try_grab_folio() documentation, with
  * "refs=1".
  *
  * Return: true for success, or if no action was required (if neither FOLL_PIN
@@ -200,7 +201,7 @@  bool __must_check try_grab_page(struct page *page, unsigned int flags)
 	if (!(flags & (FOLL_GET | FOLL_PIN)))
 		return true;
 
-	return try_grab_compound_head(page, 1, flags);
+	return try_grab_folio(page, 1, flags);
 }
 
 /**
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index abcd1785c629..ab67b13c4a71 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -6072,7 +6072,7 @@  long follow_hugetlb_page(struct mm_struct *mm, struct vm_area_struct *vma,
 
 		if (pages) {
 			/*
-			 * try_grab_compound_head() should always succeed here,
+			 * try_grab_folio() should always succeed here,
 			 * because: a) we hold the ptl lock, and b) we've just
 			 * checked that the huge page is present in the page
 			 * tables. If the huge page is present, then the tail
@@ -6081,9 +6081,8 @@  long follow_hugetlb_page(struct mm_struct *mm, struct vm_area_struct *vma,
 			 * any way. So this page must be available at this
 			 * point, unless the page refcount overflowed:
 			 */
-			if (WARN_ON_ONCE(!try_grab_compound_head(pages[i],
-								 refs,
-								 flags))) {
+			if (WARN_ON_ONCE(!try_grab_folio(pages[i], refs,
+							 flags))) {
 				spin_unlock(ptl);
 				remainder = 0;
 				err = -ENOMEM;