
[-V3,8/9] mm, pcp: decrease PCP high if free pages < high watermark

Message ID 20231016053002.756205-9-ying.huang@intel.com (mailing list archive)
State New
Series mm: PCP high auto-tuning

Commit Message

Huang, Ying Oct. 16, 2023, 5:30 a.m. UTC
One goal of the PCP is to keep as few pages as possible on the per-CPU
lists when the system is short of free pages.  To reach that goal, when
page reclaim is active for the zone (ZONE_RECLAIM_ACTIVE), we stop
increasing PCP high in the allocation path, and decrease PCP high and
free some pages in the freeing path.  But this may be too late, because
the background page reclaim itself may introduce latency for some
workloads.  So, in this patch, during page allocation we detect whether
the number of free pages of the zone is below the high watermark.  If
so, we stop increasing PCP high in the allocation path, and decrease
PCP high and free some pages in the freeing path.  With this, we can
reduce the possibility of premature background page reclaim caused by
a too-large PCP.

The high watermark check is done in the allocation path to reduce the
overhead in the hotter freeing path.

Signed-off-by: "Huang, Ying" <ying.huang@intel.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Mel Gorman <mgorman@techsingularity.net>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: David Hildenbrand <david@redhat.com>
Cc: Johannes Weiner <jweiner@redhat.com>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Pavel Tatashin <pasha.tatashin@soleen.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Christoph Lameter <cl@linux.com>
---
 include/linux/mmzone.h |  1 +
 mm/page_alloc.c        | 33 +++++++++++++++++++++++++++++++--
 2 files changed, 32 insertions(+), 2 deletions(-)
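
In short, the mechanism has two halves.  On allocation, the zone is
checked against the high watermark and the new ZONE_BELOW_HIGH flag is
set once it falls below it; on free, nr_pcp_high() sees the flag and
shrinks pcp->high towards high_min instead of growing it.  A condensed
sketch of the allocation side (simplified from the
get_page_from_freelist() hunk below, not the exact code; the shortcut
that jumps straight to try_this_zone when the high watermark is met is
left out):

	/* set once the zone drops below the high watermark */
	if (!test_bit(ZONE_BELOW_HIGH, &zone->flags) &&
	    !zone_watermark_fast(zone, order, high_wmark_pages(zone),
				 ac->highest_zoneidx, alloc_flags, gfp_mask))
		set_bit(ZONE_BELOW_HIGH, &zone->flags);

	/* the regular min/low watermark check then follows as before */
	mark = wmark_pages(zone, alloc_flags & ALLOC_WMARK_MASK);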

Comments

Mel Gorman Oct. 19, 2023, 12:33 p.m. UTC | #1
On Mon, Oct 16, 2023 at 01:30:01PM +0800, Huang Ying wrote:
> One target of PCP is to minimize pages in PCP if the system free pages
> is too few.  To reach that target, when page reclaiming is active for
> the zone (ZONE_RECLAIM_ACTIVE), we will stop increasing PCP high in
> allocating path, decrease PCP high and free some pages in freeing
> path.  But this may be too late because the background page reclaiming
> may introduce latency for some workloads.  So, in this patch, during
> page allocation we will detect whether the number of free pages of the
> zone is below high watermark.  If so, we will stop increasing PCP high
> in allocating path, decrease PCP high and free some pages in freeing
> path.  With this, we can reduce the possibility of the premature
> background page reclaiming caused by too large PCP.
> 
> The high watermark checking is done in allocating path to reduce the
> overhead in hotter freeing path.
> 
> Signed-off-by: "Huang, Ying" <ying.huang@intel.com>
> Cc: Andrew Morton <akpm@linux-foundation.org>
> Cc: Mel Gorman <mgorman@techsingularity.net>
> Cc: Vlastimil Babka <vbabka@suse.cz>
> Cc: David Hildenbrand <david@redhat.com>
> Cc: Johannes Weiner <jweiner@redhat.com>
> Cc: Dave Hansen <dave.hansen@linux.intel.com>
> Cc: Michal Hocko <mhocko@suse.com>
> Cc: Pavel Tatashin <pasha.tatashin@soleen.com>
> Cc: Matthew Wilcox <willy@infradead.org>
> Cc: Christoph Lameter <cl@linux.com>
> ---
>  include/linux/mmzone.h |  1 +
>  mm/page_alloc.c        | 33 +++++++++++++++++++++++++++++++--
>  2 files changed, 32 insertions(+), 2 deletions(-)
> 
> diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
> index ec3f7daedcc7..c88770381aaf 100644
> --- a/include/linux/mmzone.h
> +++ b/include/linux/mmzone.h
> @@ -1018,6 +1018,7 @@ enum zone_flags {
>  					 * Cleared when kswapd is woken.
>  					 */
>  	ZONE_RECLAIM_ACTIVE,		/* kswapd may be scanning the zone. */
> +	ZONE_BELOW_HIGH,		/* zone is below high watermark. */
>  };
>  
>  static inline unsigned long zone_managed_pages(struct zone *zone)
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 8382ad2cdfd4..253fc7d0498e 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -2407,7 +2407,13 @@ static int nr_pcp_high(struct per_cpu_pages *pcp, struct zone *zone,
>  		return min(batch << 2, pcp->high);
>  	}
>  
> -	if (pcp->count >= high && high_min != high_max) {
> +	if (high_min == high_max)
> +		return high;
> +
> +	if (test_bit(ZONE_BELOW_HIGH, &zone->flags)) {
> +		pcp->high = max(high - (batch << pcp->free_factor), high_min);
> +		high = max(pcp->count, high_min);
> +	} else if (pcp->count >= high) {
>  		int need_high = (batch << pcp->free_factor) + batch;
>  
>  		/* pcp->high should be large enough to hold batch freed pages */
> @@ -2457,6 +2463,10 @@ static void free_unref_page_commit(struct zone *zone, struct per_cpu_pages *pcp,
>  	if (pcp->count >= high) {
>  		free_pcppages_bulk(zone, nr_pcp_free(pcp, batch, high, free_high),
>  				   pcp, pindex);
> +		if (test_bit(ZONE_BELOW_HIGH, &zone->flags) &&
> +		    zone_watermark_ok(zone, 0, high_wmark_pages(zone),
> +				      ZONE_MOVABLE, 0))
> +			clear_bit(ZONE_BELOW_HIGH, &zone->flags);
>  	}
>  }
>  

This is a relatively fast path and freeing pages should not need to check
watermarks. While the overhead is mitigated because it applies only while
the zone is below the high watermark, that is also potentially an unbounded condition
if a workload is sized precisely enough. Why not clear this bit when kswapd
is going to sleep after reclaiming enough pages in a zone?

If you agree then a follow-up patch classed as a micro-optimisation is
sufficient to avoid redoing all the results again. For most of your
tests, it should be performance-neutral or borderline noise.
Huang, Ying Oct. 20, 2023, 3:30 a.m. UTC | #2
Mel Gorman <mgorman@techsingularity.net> writes:

> On Mon, Oct 16, 2023 at 01:30:01PM +0800, Huang Ying wrote:
>> One target of PCP is to minimize pages in PCP if the system free pages
>> is too few.  To reach that target, when page reclaiming is active for
>> the zone (ZONE_RECLAIM_ACTIVE), we will stop increasing PCP high in
>> allocating path, decrease PCP high and free some pages in freeing
>> path.  But this may be too late because the background page reclaiming
>> may introduce latency for some workloads.  So, in this patch, during
>> page allocation we will detect whether the number of free pages of the
>> zone is below high watermark.  If so, we will stop increasing PCP high
>> in allocating path, decrease PCP high and free some pages in freeing
>> path.  With this, we can reduce the possibility of the premature
>> background page reclaiming caused by too large PCP.
>> 
>> The high watermark checking is done in allocating path to reduce the
>> overhead in hotter freeing path.
>> 
>> Signed-off-by: "Huang, Ying" <ying.huang@intel.com>
>> Cc: Andrew Morton <akpm@linux-foundation.org>
>> Cc: Mel Gorman <mgorman@techsingularity.net>
>> Cc: Vlastimil Babka <vbabka@suse.cz>
>> Cc: David Hildenbrand <david@redhat.com>
>> Cc: Johannes Weiner <jweiner@redhat.com>
>> Cc: Dave Hansen <dave.hansen@linux.intel.com>
>> Cc: Michal Hocko <mhocko@suse.com>
>> Cc: Pavel Tatashin <pasha.tatashin@soleen.com>
>> Cc: Matthew Wilcox <willy@infradead.org>
>> Cc: Christoph Lameter <cl@linux.com>
>> ---
>>  include/linux/mmzone.h |  1 +
>>  mm/page_alloc.c        | 33 +++++++++++++++++++++++++++++++--
>>  2 files changed, 32 insertions(+), 2 deletions(-)
>> 
>> diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
>> index ec3f7daedcc7..c88770381aaf 100644
>> --- a/include/linux/mmzone.h
>> +++ b/include/linux/mmzone.h
>> @@ -1018,6 +1018,7 @@ enum zone_flags {
>>  					 * Cleared when kswapd is woken.
>>  					 */
>>  	ZONE_RECLAIM_ACTIVE,		/* kswapd may be scanning the zone. */
>> +	ZONE_BELOW_HIGH,		/* zone is below high watermark. */
>>  };
>>  
>>  static inline unsigned long zone_managed_pages(struct zone *zone)
>> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
>> index 8382ad2cdfd4..253fc7d0498e 100644
>> --- a/mm/page_alloc.c
>> +++ b/mm/page_alloc.c
>> @@ -2407,7 +2407,13 @@ static int nr_pcp_high(struct per_cpu_pages *pcp, struct zone *zone,
>>  		return min(batch << 2, pcp->high);
>>  	}
>>  
>> -	if (pcp->count >= high && high_min != high_max) {
>> +	if (high_min == high_max)
>> +		return high;
>> +
>> +	if (test_bit(ZONE_BELOW_HIGH, &zone->flags)) {
>> +		pcp->high = max(high - (batch << pcp->free_factor), high_min);
>> +		high = max(pcp->count, high_min);
>> +	} else if (pcp->count >= high) {
>>  		int need_high = (batch << pcp->free_factor) + batch;
>>  
>>  		/* pcp->high should be large enough to hold batch freed pages */
>> @@ -2457,6 +2463,10 @@ static void free_unref_page_commit(struct zone *zone, struct per_cpu_pages *pcp,
>>  	if (pcp->count >= high) {
>>  		free_pcppages_bulk(zone, nr_pcp_free(pcp, batch, high, free_high),
>>  				   pcp, pindex);
>> +		if (test_bit(ZONE_BELOW_HIGH, &zone->flags) &&
>> +		    zone_watermark_ok(zone, 0, high_wmark_pages(zone),
>> +				      ZONE_MOVABLE, 0))
>> +			clear_bit(ZONE_BELOW_HIGH, &zone->flags);
>>  	}
>>  }
>>  
>
> This is a relatively fast path and freeing pages should not need to check
> watermarks.

Another thing that mitigates the overhead is that the watermark check
only occurs when we free pages from the PCP to buddy.  That is, in most
cases, once per 63 pages freed.
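
As a rough sketch of where that check sits (condensed from the
free_unref_page_commit() hunk quoted above; the batch value of 63 is
the common default, not a guarantee):

	if (pcp->count >= high) {	/* hit roughly once per pcp->batch frees */
		free_pcppages_bulk(zone, nr_pcp_free(pcp, batch, high, free_high),
				   pcp, pindex);
		/* cheap test_bit() first; zone_watermark_ok() runs only if it is set */
		if (test_bit(ZONE_BELOW_HIGH, &zone->flags) &&
		    zone_watermark_ok(zone, 0, high_wmark_pages(zone),
				      ZONE_MOVABLE, 0))
			clear_bit(ZONE_BELOW_HIGH, &zone->flags);
	}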

> While the overhead is mitigated because it applies only when
> the watermark is below high, that is also potentially an unbounded condition
> if a workload is sized precisely enough. Why not clear this bit when kswapd
> is going to sleep after reclaiming enough pages in a zone?

IIUC, if the number of free pages stays above the low watermark, then
kswapd will have no opportunity to be woken up, even if the number of
free pages was at some point below the high watermark.  So there would
be no kswapd sleep at which to clear the bit.

> If you agree then a follow-up patch classed as a micro-optimisation is
> sufficient to avoid redoing all the results again. For most of your
> tests, it should be performance-neutral or borderline noise.

--
Best Regards,
Huang, Ying
Mel Gorman Oct. 23, 2023, 9:26 a.m. UTC | #3
On Fri, Oct 20, 2023 at 11:30:53AM +0800, Huang, Ying wrote:
> Mel Gorman <mgorman@techsingularity.net> writes:
> 
> > On Mon, Oct 16, 2023 at 01:30:01PM +0800, Huang Ying wrote:
> >> One target of PCP is to minimize pages in PCP if the system free pages
> >> is too few.  To reach that target, when page reclaiming is active for
> >> the zone (ZONE_RECLAIM_ACTIVE), we will stop increasing PCP high in
> >> allocating path, decrease PCP high and free some pages in freeing
> >> path.  But this may be too late because the background page reclaiming
> >> may introduce latency for some workloads.  So, in this patch, during
> >> page allocation we will detect whether the number of free pages of the
> >> zone is below high watermark.  If so, we will stop increasing PCP high
> >> in allocating path, decrease PCP high and free some pages in freeing
> >> path.  With this, we can reduce the possibility of the premature
> >> background page reclaiming caused by too large PCP.
> >> 
> >> The high watermark checking is done in allocating path to reduce the
> >> overhead in hotter freeing path.
> >> 
> >> Signed-off-by: "Huang, Ying" <ying.huang@intel.com>
> >> Cc: Andrew Morton <akpm@linux-foundation.org>
> >> Cc: Mel Gorman <mgorman@techsingularity.net>
> >> Cc: Vlastimil Babka <vbabka@suse.cz>
> >> Cc: David Hildenbrand <david@redhat.com>
> >> Cc: Johannes Weiner <jweiner@redhat.com>
> >> Cc: Dave Hansen <dave.hansen@linux.intel.com>
> >> Cc: Michal Hocko <mhocko@suse.com>
> >> Cc: Pavel Tatashin <pasha.tatashin@soleen.com>
> >> Cc: Matthew Wilcox <willy@infradead.org>
> >> Cc: Christoph Lameter <cl@linux.com>
> >> ---
> >>  include/linux/mmzone.h |  1 +
> >>  mm/page_alloc.c        | 33 +++++++++++++++++++++++++++++++--
> >>  2 files changed, 32 insertions(+), 2 deletions(-)
> >> 
> >> diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
> >> index ec3f7daedcc7..c88770381aaf 100644
> >> --- a/include/linux/mmzone.h
> >> +++ b/include/linux/mmzone.h
> >> @@ -1018,6 +1018,7 @@ enum zone_flags {
> >>  					 * Cleared when kswapd is woken.
> >>  					 */
> >>  	ZONE_RECLAIM_ACTIVE,		/* kswapd may be scanning the zone. */
> >> +	ZONE_BELOW_HIGH,		/* zone is below high watermark. */
> >>  };
> >>  
> >>  static inline unsigned long zone_managed_pages(struct zone *zone)
> >> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> >> index 8382ad2cdfd4..253fc7d0498e 100644
> >> --- a/mm/page_alloc.c
> >> +++ b/mm/page_alloc.c
> >> @@ -2407,7 +2407,13 @@ static int nr_pcp_high(struct per_cpu_pages *pcp, struct zone *zone,
> >>  		return min(batch << 2, pcp->high);
> >>  	}
> >>  
> >> -	if (pcp->count >= high && high_min != high_max) {
> >> +	if (high_min == high_max)
> >> +		return high;
> >> +
> >> +	if (test_bit(ZONE_BELOW_HIGH, &zone->flags)) {
> >> +		pcp->high = max(high - (batch << pcp->free_factor), high_min);
> >> +		high = max(pcp->count, high_min);
> >> +	} else if (pcp->count >= high) {
> >>  		int need_high = (batch << pcp->free_factor) + batch;
> >>  
> >>  		/* pcp->high should be large enough to hold batch freed pages */
> >> @@ -2457,6 +2463,10 @@ static void free_unref_page_commit(struct zone *zone, struct per_cpu_pages *pcp,
> >>  	if (pcp->count >= high) {
> >>  		free_pcppages_bulk(zone, nr_pcp_free(pcp, batch, high, free_high),
> >>  				   pcp, pindex);
> >> +		if (test_bit(ZONE_BELOW_HIGH, &zone->flags) &&
> >> +		    zone_watermark_ok(zone, 0, high_wmark_pages(zone),
> >> +				      ZONE_MOVABLE, 0))
> >> +			clear_bit(ZONE_BELOW_HIGH, &zone->flags);
> >>  	}
> >>  }
> >>  
> >
> > This is a relatively fast path and freeing pages should not need to check
> > watermarks.
> 
> Another stuff that mitigate the overhead is that the watermarks checking
> only occurs when we free pages from PCP to buddy.  That is, in most
> cases, every 63 page freeing.
> 

True

> > While the overhead is mitigated because it applies only when
> > the watermark is below high, that is also potentially an unbounded condition
> > if a workload is sized precisely enough. Why not clear this bit when kswapd
> > is going to sleep after reclaiming enough pages in a zone?
> 
> IIUC, if the number of free pages is kept larger than the low watermark,
> then kswapd will have no opportunity to be waken up even if the number
> of free pages was ever smaller than the high watermark.
> 

Also true and I did think of that later. I guess it's ok, the chances
are that the series overall offsets any micro-costs like this so I'm
happy. If, for some reason, this overhead is noticeable (doubtful), then
it can be revisited.

Thanks.

Patch

diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index ec3f7daedcc7..c88770381aaf 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -1018,6 +1018,7 @@  enum zone_flags {
 					 * Cleared when kswapd is woken.
 					 */
 	ZONE_RECLAIM_ACTIVE,		/* kswapd may be scanning the zone. */
+	ZONE_BELOW_HIGH,		/* zone is below high watermark. */
 };
 
 static inline unsigned long zone_managed_pages(struct zone *zone)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 8382ad2cdfd4..253fc7d0498e 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2407,7 +2407,13 @@  static int nr_pcp_high(struct per_cpu_pages *pcp, struct zone *zone,
 		return min(batch << 2, pcp->high);
 	}
 
-	if (pcp->count >= high && high_min != high_max) {
+	if (high_min == high_max)
+		return high;
+
+	if (test_bit(ZONE_BELOW_HIGH, &zone->flags)) {
+		pcp->high = max(high - (batch << pcp->free_factor), high_min);
+		high = max(pcp->count, high_min);
+	} else if (pcp->count >= high) {
 		int need_high = (batch << pcp->free_factor) + batch;
 
 		/* pcp->high should be large enough to hold batch freed pages */
@@ -2457,6 +2463,10 @@  static void free_unref_page_commit(struct zone *zone, struct per_cpu_pages *pcp,
 	if (pcp->count >= high) {
 		free_pcppages_bulk(zone, nr_pcp_free(pcp, batch, high, free_high),
 				   pcp, pindex);
+		if (test_bit(ZONE_BELOW_HIGH, &zone->flags) &&
+		    zone_watermark_ok(zone, 0, high_wmark_pages(zone),
+				      ZONE_MOVABLE, 0))
+			clear_bit(ZONE_BELOW_HIGH, &zone->flags);
 	}
 }
 
@@ -2763,7 +2773,7 @@  static int nr_pcp_alloc(struct per_cpu_pages *pcp, struct zone *zone, int order)
 	 * If we had larger pcp->high, we could avoid to allocate from
 	 * zone.
 	 */
-	if (high_min != high_max && !test_bit(ZONE_RECLAIM_ACTIVE, &zone->flags))
+	if (high_min != high_max && !test_bit(ZONE_BELOW_HIGH, &zone->flags))
 		high = pcp->high = min(high + batch, high_max);
 
 	if (!order) {
@@ -3225,6 +3235,25 @@  get_page_from_freelist(gfp_t gfp_mask, unsigned int order, int alloc_flags,
 			}
 		}
 
+		/*
+		 * Detect whether the number of free pages is below high
+		 * watermark.  If so, we will decrease pcp->high and free
+		 * PCP pages in free path to reduce the possibility of
+		 * premature page reclaiming.  Detection is done here to
+		 * avoid to do that in hotter free path.
+		 */
+		if (test_bit(ZONE_BELOW_HIGH, &zone->flags))
+			goto check_alloc_wmark;
+
+		mark = high_wmark_pages(zone);
+		if (zone_watermark_fast(zone, order, mark,
+					ac->highest_zoneidx, alloc_flags,
+					gfp_mask))
+			goto try_this_zone;
+		else
+			set_bit(ZONE_BELOW_HIGH, &zone->flags);
+
+check_alloc_wmark:
 		mark = wmark_pages(zone, alloc_flags & ALLOC_WMARK_MASK);
 		if (!zone_watermark_fast(zone, order, mark,
 				       ac->highest_zoneidx, alloc_flags,