
[v3,1/4] mm/gup: add compound page list iterator

Message ID 20210205204127.29441-2-joao.m.martins@oracle.com (mailing list archive)
State New, archived
Series mm/gup: page unpinning improvements

Commit Message

Joao Martins Feb. 5, 2021, 8:41 p.m. UTC
Add a helper that iterates over head pages in a list of pages. It
essentially counts the tails until the next page to process has a
different head than the current one. This is going to be used by the
unpin_user_pages() family of functions to batch the head page refcount
update once for all passed consecutive tail pages.

Suggested-by: Jason Gunthorpe <jgg@nvidia.com>
Signed-off-by: Joao Martins <joao.m.martins@oracle.com>
Reviewed-by: John Hubbard <jhubbard@nvidia.com>
---
 mm/gup.c | 26 ++++++++++++++++++++++++++
 1 file changed, 26 insertions(+)
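
To make the intended use concrete, here is a rough sketch of how a caller in
the unpin_user_pages() family would use the new iterator (the actual
conversions are done by later patches in this series; put_compound_head() is
the existing release helper in mm/gup.c):

	void unpin_user_pages(struct page **pages, unsigned long npages)
	{
		unsigned long index;
		struct page *head;
		unsigned int ntails;

		/* One batched refcount update per run of tails sharing a head. */
		for_each_compound_head(index, pages, npages, head, ntails)
			put_compound_head(head, ntails, FOLL_PIN);
	}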

Comments

Jason Gunthorpe Feb. 10, 2021, 11:20 p.m. UTC | #1
On Fri, Feb 05, 2021 at 08:41:24PM +0000, Joao Martins wrote:
> Add a helper that iterates over head pages in a list of pages. It
> essentially counts the tails until the next page to process has a
> different head than the current one. This is going to be used by the
> unpin_user_pages() family of functions to batch the head page refcount
> update once for all passed consecutive tail pages.
> 
> Suggested-by: Jason Gunthorpe <jgg@nvidia.com>
> Signed-off-by: Joao Martins <joao.m.martins@oracle.com>
> Reviewed-by: John Hubbard <jhubbard@nvidia.com>
> ---
>  mm/gup.c | 26 ++++++++++++++++++++++++++
>  1 file changed, 26 insertions(+)

Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>

This can be used for check_and_migrate_cma_pages() too (there is a
series around to change this logic though, not sure if it has landed
yet)

Jason
Joao Martins Feb. 11, 2021, 10:45 a.m. UTC | #2
On 2/10/21 11:20 PM, Jason Gunthorpe wrote:
> On Fri, Feb 05, 2021 at 08:41:24PM +0000, Joao Martins wrote:
>> Add a helper that iterates over head pages in a list of pages. It
>> essentially counts the tails until the next page to process has a
>> different head than the current one. This is going to be used by the
>> unpin_user_pages() family of functions to batch the head page refcount
>> update once for all passed consecutive tail pages.
>>
>> Suggested-by: Jason Gunthorpe <jgg@nvidia.com>
>> Signed-off-by: Joao Martins <joao.m.martins@oracle.com>
>> Reviewed-by: John Hubbard <jhubbard@nvidia.com>
>> ---
>>  mm/gup.c | 26 ++++++++++++++++++++++++++
>>  1 file changed, 26 insertions(+)
> 
> Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
> 
Thanks!

> This can be used for check_and_migrate_cma_pages() too (there is a
> series around to change this logic though, not sure if it has landed
> yet)

It got unqueued AFAIUI.

It makes sense for most users today except hugetlb pages, which are also
the fastest page pinner today. Unilaterally using this iterator would make
all page types pay the added cost. So we either keep the current loop with
its exception for PageHuge() head pages, or do it properly with that split
logic we were talking about on the other thread.
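
For reference, the PageHuge() exception being referred to looks roughly like
the sketch below (hypothetical, not code from this patch): GUP fills the list
with contiguous tails for a hugetlb page, so the walk can jump past the whole
compound page in one step instead of comparing compound_head() on every entry.

	/*
	 * Hypothetical illustration of the PageHuge() shortcut, not code
	 * from this series: how many list entries can be skipped in one
	 * step when list[i] belongs to a hugetlb page.
	 */
	static unsigned long hugetlb_ntails(struct page **list, unsigned long i,
					    unsigned long npages)
	{
		struct page *head = compound_head(list[i]);

		/* remaining tails of this huge page, clamped to the list length */
		return min_t(unsigned long, npages - i,
			     compound_nr(head) - (list[i] - head));
	}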

	Joao

Patch

diff --git a/mm/gup.c b/mm/gup.c
index d68bcb482b11..8defe4f670d5 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -215,6 +215,32 @@  void unpin_user_page(struct page *page)
 }
 EXPORT_SYMBOL(unpin_user_page);
 
+static inline void compound_next(unsigned long i, unsigned long npages,
+				 struct page **list, struct page **head,
+				 unsigned int *ntails)
+{
+	struct page *page;
+	unsigned int nr;
+
+	if (i >= npages)
+		return;
+
+	page = compound_head(list[i]);
+	for (nr = i + 1; nr < npages; nr++) {
+		if (compound_head(list[nr]) != page)
+			break;
+	}
+
+	*head = page;
+	*ntails = nr - i;
+}
+
+#define for_each_compound_head(__i, __list, __npages, __head, __ntails) \
+	for (__i = 0, \
+	     compound_next(__i, __npages, __list, &(__head), &(__ntails)); \
+	     __i < __npages; __i += __ntails, \
+	     compound_next(__i, __npages, __list, &(__head), &(__ntails)))
+
 /**
  * unpin_user_pages_dirty_lock() - release and optionally dirty gup-pinned pages
  * @pages:  array of pages to be maybe marked dirty, and definitely released.