[04/17] mm: Convert page_maybe_dma_pinned() to use a folio

Message ID 20220102215729.2943705-5-willy@infradead.org (mailing list archive)
State New
Series Convert GUP to folios

Commit Message

Matthew Wilcox Jan. 2, 2022, 9:57 p.m. UTC
Replaces three calls to compound_head() with one.  This removes the last
user of compound_pincount(), so remove that helper too.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 include/linux/mm.h | 17 ++++++-----------
 1 file changed, 6 insertions(+), 11 deletions(-)
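
For anyone counting the three compound_head() calls mentioned in the commit
message, here is the old body annotated as an illustration only: the second
call is visible in the compound_pincount() helper this patch removes, and the
first is inferred to live inside hpage_pincount_available(), whose definition
is not part of this diff.

static inline bool page_maybe_dma_pinned(struct page *page)
{
	if (hpage_pincount_available(page))		/* (1) compound_head() inside this helper (inferred) */
		return compound_pincount(page) > 0;	/* (2) compound_head() in the helper removed below */

	/* ... overflow comment elided ... */
	return ((unsigned int)page_ref_count(compound_head(page))) >=	/* (3) explicit call */
		GUP_PIN_COUNTING_BIAS;
}

The replacement body performs the head lookup once via page_folio() and reuses
the resulting folio for both the pincount check and the refcount fallback.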

Comments

Christoph Hellwig Jan. 4, 2022, 8:03 a.m. UTC | #1
On Sun, Jan 02, 2022 at 09:57:16PM +0000, Matthew Wilcox (Oracle) wrote:
> Replaces three calls to compound_head() with one.  This removes the last
> user of compound_pincount(), so remove that helper too.
> 
> Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>

Looks good,

Reviewed-by: Christoph Hellwig <hch@lst.de>
John Hubbard Jan. 4, 2022, 10:01 p.m. UTC | #2
On 1/2/22 13:57, Matthew Wilcox (Oracle) wrote:
> Replaces three calls to compound_head() with one.  This removes the last
> user of compound_pincount(), so remove that helper too.
> 
> Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
> ---
>   include/linux/mm.h | 17 ++++++-----------
>   1 file changed, 6 insertions(+), 11 deletions(-)
> 
> diff --git a/include/linux/mm.h b/include/linux/mm.h
> index 269b5484d66e..00dcea53bb96 100644
> --- a/include/linux/mm.h
> +++ b/include/linux/mm.h
> @@ -947,13 +947,6 @@ static inline int head_compound_pincount(struct page *head)
>   	return atomic_read(compound_pincount_ptr(head));
>   }
>   
> -static inline int compound_pincount(struct page *page)
> -{
> -	VM_BUG_ON_PAGE(!hpage_pincount_available(page), page);
> -	page = compound_head(page);
> -	return head_compound_pincount(page);
> -}
> -

Yes, the only remaining search hit is in a printk() string in mm/debug.c,
and that is still reasonable wording even in the new world order, so I
think we're good:

mm/debug.c:96:  pr_warn("head:%p order:%u compound_mapcount:%d compound_pincount:%d\n",


>   static inline void set_compound_order(struct page *page, unsigned int order)
>   {
>   	page[1].compound_order = order;
> @@ -1347,18 +1340,20 @@ void unpin_user_pages(struct page **pages, unsigned long npages);
>    */
>   static inline bool page_maybe_dma_pinned(struct page *page)
>   {
> -	if (hpage_pincount_available(page))
> -		return compound_pincount(page) > 0;
> +	struct folio *folio = page_folio(page);
> +
> +	if (folio_pincount_available(folio))
> +		return atomic_read(folio_pincount_ptr(folio)) > 0;
>   
>   	/*
>   	 * page_ref_count() is signed. If that refcount overflows, then
>   	 * page_ref_count() returns a negative value, and callers will avoid
>   	 * further incrementing the refcount.
>   	 *
> -	 * Here, for that overflow case, use the signed bit to count a little
> +	 * Here, for that overflow case, use the sign bit to count a little
>   	 * bit higher via unsigned math, and thus still get an accurate result.
>   	 */
> -	return ((unsigned int)page_ref_count(compound_head(page))) >=
> +	return ((unsigned int)folio_ref_count(folio)) >=
>   		GUP_PIN_COUNTING_BIAS;
>   }
>   

Reviewed-by: John Hubbard <jhubbard@nvidia.com>


thanks,

Patch

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 269b5484d66e..00dcea53bb96 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -947,13 +947,6 @@ static inline int head_compound_pincount(struct page *head)
 	return atomic_read(compound_pincount_ptr(head));
 }
 
-static inline int compound_pincount(struct page *page)
-{
-	VM_BUG_ON_PAGE(!hpage_pincount_available(page), page);
-	page = compound_head(page);
-	return head_compound_pincount(page);
-}
-
 static inline void set_compound_order(struct page *page, unsigned int order)
 {
 	page[1].compound_order = order;
@@ -1347,18 +1340,20 @@ void unpin_user_pages(struct page **pages, unsigned long npages);
  */
 static inline bool page_maybe_dma_pinned(struct page *page)
 {
-	if (hpage_pincount_available(page))
-		return compound_pincount(page) > 0;
+	struct folio *folio = page_folio(page);
+
+	if (folio_pincount_available(folio))
+		return atomic_read(folio_pincount_ptr(folio)) > 0;
 
 	/*
 	 * page_ref_count() is signed. If that refcount overflows, then
 	 * page_ref_count() returns a negative value, and callers will avoid
 	 * further incrementing the refcount.
 	 *
-	 * Here, for that overflow case, use the signed bit to count a little
+	 * Here, for that overflow case, use the sign bit to count a little
 	 * bit higher via unsigned math, and thus still get an accurate result.
 	 */
-	return ((unsigned int)page_ref_count(compound_head(page))) >=
+	return ((unsigned int)folio_ref_count(folio)) >=
 		GUP_PIN_COUNTING_BIAS;
 }
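
The cast to unsigned int in the final comparison is what keeps the result
meaningful once the refcount overflows. Here is a minimal userspace sketch of
that sign-bit trick, not kernel code; the only value borrowed from
include/linux/mm.h is GUP_PIN_COUNTING_BIAS (1U << 10).

#include <stdio.h>

#define GUP_PIN_COUNTING_BIAS (1U << 10)

int main(void)
{
	/* A refcount that has just wrapped past INT_MAX looks negative. */
	int refcount = -2147483647;

	/* A signed comparison would wrongly report "not pinned"... */
	printf("signed:   %d >= %d -> %s\n", refcount,
	       (int)GUP_PIN_COUNTING_BIAS,
	       refcount >= (int)GUP_PIN_COUNTING_BIAS ?
			"maybe pinned" : "not pinned");

	/* ...while reinterpreting the same bits as unsigned still clears the bias. */
	printf("unsigned: %u >= %u -> %s\n", (unsigned int)refcount,
	       GUP_PIN_COUNTING_BIAS,
	       (unsigned int)refcount >= GUP_PIN_COUNTING_BIAS ?
			"maybe pinned" : "not pinned");

	return 0;
}

Compiled and run, the first line reports "not pinned" and the second
"maybe pinned", which is exactly the behaviour the comment describes.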