
[09/25] mm, compaction: Use the page allocator bulk-free helper for lists of pages

Message ID: 20190104125011.16071-10-mgorman@techsingularity.net (mailing list archive)
State: New, archived
Series: Increase success rates and reduce latency of compaction v2

Commit Message

Mel Gorman Jan. 4, 2019, 12:49 p.m. UTC
release_freepages() is a simpler version of free_unref_page_list() but it
tracks the highest PFN for caching the restart point of the compaction
free scanner. This patch optionally tracks the highest PFN in the core
helper and converts compaction to use it. The performance impact is
limited but it should reduce lock contention slightly in some cases.
The main benefit is removing some partially duplicated code.

Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
---
 include/linux/gfp.h |  7 ++++++-
 mm/compaction.c     | 12 +++---------
 mm/page_alloc.c     | 10 +++++++++-
 3 files changed, 18 insertions(+), 11 deletions(-)
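
For context on the "restart point of the compaction free scanner" mentioned
above: the PFN returned by release_freepages() feeds the free scanner's
cached restart position in compact_zone(). The following is a simplified
sketch of that consumer, not part of this patch, with the exact upstream
details elided:

	/* Sketch of the consumer in compact_zone(); simplified and not part
	 * of this patch. Isolated-but-unused free pages are given back, and
	 * the highest freed PFN is used to pull the cached free-scanner
	 * position back so those pages are rescanned on the next attempt. */
	if (cc->nr_freepages > 0) {
		unsigned long free_pfn = release_freepages(&cc->freepages);

		cc->nr_freepages = 0;

		/* The cached pfn is always the first in a pageblock. */
		free_pfn = pageblock_start_pfn(free_pfn);

		if (free_pfn > zone->compact_cached_free_pfn)
			zone->compact_cached_free_pfn = free_pfn;
	}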

Comments

Vlastimil Babka Jan. 15, 2019, 12:39 p.m. UTC | #1
On 1/4/19 1:49 PM, Mel Gorman wrote:
> release_freepages() is a simpler version of free_unref_page_list() but it
> tracks the highest PFN for caching the restart point of the compaction
> free scanner. This patch optionally tracks the highest PFN in the core
> helper and converts compaction to use it. The performance impact is
> limited but it should reduce lock contention slightly in some cases.
> The main benefit is removing some partially duplicated code.
> 
> Signed-off-by: Mel Gorman <mgorman@techsingularity.net>

...

> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -2876,18 +2876,26 @@ void free_unref_page(struct page *page)
>  /*
>   * Free a list of 0-order pages
>   */
> -void free_unref_page_list(struct list_head *list)
> +void __free_page_list(struct list_head *list, bool dropref,
> +				unsigned long *highest_pfn)
>  {
>  	struct page *page, *next;
>  	unsigned long flags, pfn;
>  	int batch_count = 0;
>  
> +	if (highest_pfn)
> +		*highest_pfn = 0;
> +
>  	/* Prepare pages for freeing */
>  	list_for_each_entry_safe(page, next, list, lru) {
> +		if (dropref)
> +			WARN_ON_ONCE(!put_page_testzero(page));

I've thought about it again and still think it can cause spurious
warnings. We enter this function with one page pin, which means somebody
else might be doing pfn scanning and get_page_unless_zero() with
success, so there are two pins. Then we do the put_page_testzero() above
and go back to one pin, and warn. You said "this function simply does
not expect it and the callers do not violate the rule", but this is
rather about potential parallel pfn scanning activity and not about this
function's callers. Maybe there really is no parallel pfn scanner that
would try to pin a page with a state the page has when it's processed by
this function, but I wouldn't bet on it (any state checks preceding the
pin might also be racy etc.).
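
To make the race concrete, here is a schematic interleaving; the scanner side
is hypothetical and only assumes the common speculative pin pattern built on
get_page_unless_zero():

	/*
	 * Schematic interleaving only, not taken from any particular caller.
	 *
	 *  CPU A: __free_page_list(list, true, ...)  CPU B: PFN scanner
	 *  ----------------------------------------  --------------------------
	 *  enters with the page's refcount == 1
	 *                                            page = pfn_to_page(pfn);
	 *                                            get_page_unless_zero(page)
	 *                                                succeeds, 1 -> 2
	 *  WARN_ON_ONCE(!put_page_testzero(page));
	 *      drops 2 -> 1, testzero is false,
	 *      so the warning fires even though
	 *      no caller of the helper broke the rule
	 *                                            put_page(page) when done
	 */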

>  		pfn = page_to_pfn(page);
>  		if (!free_unref_page_prepare(page, pfn))
>  			list_del(&page->lru);
>  		set_page_private(page, pfn);
> +		if (highest_pfn && pfn > *highest_pfn)
> +			*highest_pfn = pfn;
>  	}
>  
>  	local_irq_save(flags);
>
Mel Gorman Jan. 16, 2019, 9:46 a.m. UTC | #2
On Tue, Jan 15, 2019 at 01:39:28PM +0100, Vlastimil Babka wrote:
> On 1/4/19 1:49 PM, Mel Gorman wrote:
> > release_freepages() is a simpler version of free_unref_page_list() but it
> > tracks the highest PFN for caching the restart point of the compaction
> > free scanner. This patch optionally tracks the highest PFN in the core
> > helper and converts compaction to use it. The performance impact is
> > limited but it should reduce lock contention slightly in some cases.
> > The main benefit is removing some partially duplicated code.
> > 
> > Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
> 
> ...
> 
> > --- a/mm/page_alloc.c
> > +++ b/mm/page_alloc.c
> > @@ -2876,18 +2876,26 @@ void free_unref_page(struct page *page)
> >  /*
> >   * Free a list of 0-order pages
> >   */
> > -void free_unref_page_list(struct list_head *list)
> > +void __free_page_list(struct list_head *list, bool dropref,
> > +				unsigned long *highest_pfn)
> >  {
> >  	struct page *page, *next;
> >  	unsigned long flags, pfn;
> >  	int batch_count = 0;
> >  
> > +	if (highest_pfn)
> > +		*highest_pfn = 0;
> > +
> >  	/* Prepare pages for freeing */
> >  	list_for_each_entry_safe(page, next, list, lru) {
> > +		if (dropref)
> > +			WARN_ON_ONCE(!put_page_testzero(page));
> 
> I've thought about it again and still think it can cause spurious
> warnings. We enter this function with one page pin, which means somebody
> else might be doing pfn scanning and get_page_unless_zero() with
> success, so there are two pins. Then we do the put_page_testzero() above
> and go back to one pin, and warn. You said "this function simply does
> not expect it and the callers do not violate the rule", but this is
> rather about potential parallel pfn scanning activity and not about this
> function's callers. Maybe there really is no parallel pfn scanner that
> would try to pin a page with a state the page has when it's processed by
> this function, but I wouldn't bet on it (any state checks preceding the
> pin might also be racy etc.).
> 

Ok, I'll drop this patch because in theory you're right. I wouldn't think
that parallel PFN scanning is likely to trigger it, but gup is a potential
issue. While dropping the patch will increase CPU usage slightly again, it'll
be no worse than it was before and, again, I don't want to stall the entire
series over a relatively small optimisation.

Thanks Vlastimil!

Patch

diff --git a/include/linux/gfp.h b/include/linux/gfp.h
index 5f5e25fd6149..9e58799b730f 100644
--- a/include/linux/gfp.h
+++ b/include/linux/gfp.h
@@ -543,7 +543,12 @@ void * __meminit alloc_pages_exact_nid(int nid, size_t size, gfp_t gfp_mask);
 extern void __free_pages(struct page *page, unsigned int order);
 extern void free_pages(unsigned long addr, unsigned int order);
 extern void free_unref_page(struct page *page);
-extern void free_unref_page_list(struct list_head *list);
+extern void __free_page_list(struct list_head *list, bool dropref, unsigned long *highest_pfn);
+
+static inline void free_unref_page_list(struct list_head *list)
+{
+	return __free_page_list(list, false, NULL);
+}
 
 struct page_frag_cache;
 extern void __page_frag_cache_drain(struct page *page, unsigned int count);
diff --git a/mm/compaction.c b/mm/compaction.c
index 8bf2090231a3..8f0ce44dba41 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -52,16 +52,10 @@ static inline void count_compact_events(enum vm_event_item item, long delta)
 
 static unsigned long release_freepages(struct list_head *freelist)
 {
-	struct page *page, *next;
-	unsigned long high_pfn = 0;
+	unsigned long high_pfn;
 
-	list_for_each_entry_safe(page, next, freelist, lru) {
-		unsigned long pfn = page_to_pfn(page);
-		list_del(&page->lru);
-		__free_page(page);
-		if (pfn > high_pfn)
-			high_pfn = pfn;
-	}
+	__free_page_list(freelist, true, &high_pfn);
+	INIT_LIST_HEAD(freelist);
 
 	return high_pfn;
 }
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index cde5dac6229a..57ba9d1da519 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2876,18 +2876,26 @@ void free_unref_page(struct page *page)
 /*
  * Free a list of 0-order pages
  */
-void free_unref_page_list(struct list_head *list)
+void __free_page_list(struct list_head *list, bool dropref,
+				unsigned long *highest_pfn)
 {
 	struct page *page, *next;
 	unsigned long flags, pfn;
 	int batch_count = 0;
 
+	if (highest_pfn)
+		*highest_pfn = 0;
+
 	/* Prepare pages for freeing */
 	list_for_each_entry_safe(page, next, list, lru) {
+		if (dropref)
+			WARN_ON_ONCE(!put_page_testzero(page));
 		pfn = page_to_pfn(page);
 		if (!free_unref_page_prepare(page, pfn))
 			list_del(&page->lru);
 		set_page_private(page, pfn);
+		if (highest_pfn && pfn > *highest_pfn)
+			*highest_pfn = pfn;
 	}
 
 	local_irq_save(flags);