[09/14] mm, compaction: Ignore the fragmentation avoidance boost for isolation and compaction

Message ID 20181214230310.572-10-mgorman@techsingularity.net (mailing list archive)
State New, archived
Series Increase success rates and reduce latency of compaction v1

Commit Message

Mel Gorman Dec. 14, 2018, 11:03 p.m. UTC
When pageblocks get fragmented, watermarks are artificially boosted so that pages
are reclaimed to avoid further fragmentation events. However, compaction
is often either fragmentation-neutral or moving movable pages away from
unmovable/reclaimable pages. As the actual watermarks are preserved,
allow compaction to ignore the boost factor.
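
As an aside, a minimal userspace model of the distinction the change relies
on. It assumes, as in the boost patch earlier in this series, that
min_wmark_pages() adds the transient watermark_boost on top of the stored
_watermark[WMARK_MIN], so reading the array directly bypasses the boost; the
names mirror the kernel's, but this is a sketch, not kernel source.

/*
 * Sketch: why zone->_watermark[WMARK_MIN] ignores the fragmentation
 * avoidance boost that min_wmark_pages() would include.
 */
#include <stdio.h>

enum { WMARK_MIN, WMARK_LOW, WMARK_HIGH, NR_WMARK };

struct zone {
	unsigned long _watermark[NR_WMARK];
	unsigned long watermark_boost;
};

/* Mirrors the accessor from the boost patch: stored watermark plus boost. */
static unsigned long min_wmark_pages(const struct zone *z)
{
	return z->_watermark[WMARK_MIN] + z->watermark_boost;
}

int main(void)
{
	struct zone zone = {
		._watermark = { [WMARK_MIN] = 1024 },
		.watermark_boost = 512,	/* raised after a fragmentation event */
	};
	unsigned int order = 9;		/* a THP-sized request */

	/* Old check in __isolate_free_page(): the boost inflates the target. */
	unsigned long before = min_wmark_pages(&zone) + (1UL << order);
	/* New check: only the unboosted minimum watermark plus the request. */
	unsigned long after = zone._watermark[WMARK_MIN] + (1UL << order);

	printf("target with boost:    %lu pages\n", before);	/* 2048 */
	printf("target without boost: %lu pages\n", after);	/* 1536 */
	return 0;
}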

1-socket thpscale
                                    4.20.0-rc6             4.20.0-rc6
                               finishscan-v1r4           noboost-v1r4
Amean     fault-both-1         0.00 (   0.00%)        0.00 *   0.00%*
Amean     fault-both-3      3849.90 (   0.00%)     3753.53 (   2.50%)
Amean     fault-both-5      5054.13 (   0.00%)     5396.32 (  -6.77%)
Amean     fault-both-7      7061.77 (   0.00%)     7393.46 (  -4.70%)
Amean     fault-both-12    11560.59 (   0.00%)    12155.50 (  -5.15%)
Amean     fault-both-18    16120.15 (   0.00%)    16445.96 (  -2.02%)
Amean     fault-both-24    19804.31 (   0.00%)    20465.03 (  -3.34%)
Amean     fault-both-30    25018.73 (   0.00%)    20813.54 *  16.81%*
Amean     fault-both-32    24380.19 (   0.00%)    22384.02 (   8.19%)

The impact on the scan rates is a mixed bag because this patch is very
sensitive to timing and whether the boost was active or not. However,
detailed tracing indicated that failures of migration due to a premature
ENOMEM triggered by watermark checks were eliminated.

Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
---
 mm/page_alloc.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

Comments

Vlastimil Babka Dec. 18, 2018, 12:36 p.m. UTC | #1
On 12/15/18 12:03 AM, Mel Gorman wrote:
> When pageblocks get fragmented, watermarks are artificially boosted so that pages
> are reclaimed to avoid further fragmentation events. However, compaction
> is often either fragmentation-neutral or moving movable pages away from
> unmovable/reclaimable pages. As the actual watermarks are preserved,
> allow compaction to ignore the boost factor.

Right, I should have realized that when reviewing the boost patch. I
think it would be useful to do the same change in
__compaction_suitable() as well. Compaction has its own "gap".

> 1-socket thpscale
>                                     4.20.0-rc6             4.20.0-rc6
>                                finishscan-v1r4           noboost-v1r4
> Amean     fault-both-1         0.00 (   0.00%)        0.00 *   0.00%*
> Amean     fault-both-3      3849.90 (   0.00%)     3753.53 (   2.50%)
> Amean     fault-both-5      5054.13 (   0.00%)     5396.32 (  -6.77%)
> Amean     fault-both-7      7061.77 (   0.00%)     7393.46 (  -4.70%)
> Amean     fault-both-12    11560.59 (   0.00%)    12155.50 (  -5.15%)
> Amean     fault-both-18    16120.15 (   0.00%)    16445.96 (  -2.02%)
> Amean     fault-both-24    19804.31 (   0.00%)    20465.03 (  -3.34%)
> Amean     fault-both-30    25018.73 (   0.00%)    20813.54 *  16.81%*
> Amean     fault-both-32    24380.19 (   0.00%)    22384.02 (   8.19%)
> 
> The impact on the scan rates is a mixed bag because this patch is very
> sensitive to timing and whether the boost was active or not. However,
> detailed tracing indicated that failures of migration due to a premature
> ENOMEM triggered by watermark checks were eliminated.
> 
> Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
> ---
>  mm/page_alloc.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 80535cd55a92..c7b80e62bfd9 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -3043,7 +3043,7 @@ int __isolate_free_page(struct page *page, unsigned int order)
>  		 * watermark, because we already know our high-order page
>  		 * exists.
>  		 */
> -		watermark = min_wmark_pages(zone) + (1UL << order);
> +		watermark = zone->_watermark[WMARK_MIN] + (1UL << order);
>  		if (!zone_watermark_ok(zone, 0, watermark, 0, ALLOC_CMA))
>  			return 0;
>  
>
Mel Gorman Dec. 18, 2018, 1:51 p.m. UTC | #2
On Tue, Dec 18, 2018 at 01:36:42PM +0100, Vlastimil Babka wrote:
> On 12/15/18 12:03 AM, Mel Gorman wrote:
> > When pageblocks get fragmented, watermarks are artificially boosted so that pages
> > are reclaimed to avoid further fragmentation events. However, compaction
> > is often either fragmentation-neutral or moving movable pages away from
> > unmovable/reclaimable pages. As the actual watermarks are preserved,
> > allow compaction to ignore the boost factor.
> 
> Right, I should have realized that when reviewing the boost patch. I
> think it would be useful to do the same change in
> __compaction_suitable() as well. Compaction has its own "gap".
> 

That gap is somewhat static though so I'm a bit more wary of it.  However,
the check in __isolate_free_page looks too aggressive. We isolate in
units of COMPACT_CLUSTER_MAX yet the watermark check there is based on
the allocation request. That means for THP we check whether 512 pages
can be allocated when only somewhere between 1 and 32 are needed for that
compaction cycle to complete. Adjusting that might be more appropriate?
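
A rough sketch of that alternative, which is hypothetical and not part of the
posted patch, would size the required headroom by what one compaction cycle
isolates (at most COMPACT_CLUSTER_MAX, i.e. 32 base pages) rather than by the
full allocation request:

		/*
		 * Hypothetical variant of the check in __isolate_free_page()
		 * (not what this patch does): require free headroom only for
		 * the pages one compaction cycle isolates, at most
		 * COMPACT_CLUSTER_MAX, instead of the full 1UL << order.
		 */
		watermark = zone->_watermark[WMARK_MIN] + COMPACT_CLUSTER_MAX;
		if (!zone_watermark_ok(zone, 0, watermark, 0, ALLOC_CMA))
			return 0;
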
Vlastimil Babka Dec. 18, 2018, 1:58 p.m. UTC | #3
On 12/18/18 2:51 PM, Mel Gorman wrote:
> On Tue, Dec 18, 2018 at 01:36:42PM +0100, Vlastimil Babka wrote:
>> On 12/15/18 12:03 AM, Mel Gorman wrote:
>>> When pageblocks get fragmented, watermarks are artificially boosted so that pages
>>> are reclaimed to avoid further fragmentation events. However, compaction
>>> is often either fragmentation-neutral or moving movable pages away from
>>> unmovable/reclaimable pages. As the actual watermarks are preserved,
>>> allow compaction to ignore the boost factor.
>>
>> Right, I should have realized that when reviewing the boost patch. I
>> think it would be useful to do the same change in
>> __compaction_suitable() as well. Compaction has its own "gap".
>>
> 
> That gap is somewhat static though so I'm a bit more wary of it. However,

Well, watermark boost is dynamic, but based on allocations stealing from
other migratetypes, not reflecting compaction chances of success.

> the check in __isolate_free_page looks too aggressive. We isolate in
> units of COMPACT_CLUSTER_MAX yet the watermark check there is based on
> the allocation request. That means for THP we check whether 512 pages
> can be allocated when only somewhere between 1 and 32 are needed for that
> compaction cycle to complete. Adjusting that might be more appropriate?

AFAIU the code in __isolate_free_page() reflects that if there's less
than a 512-page gap of free pages, we might form a high-order page for THP but
won't be able to allocate it afterwards due to the watermark.
Mel Gorman Dec. 18, 2018, 2:29 p.m. UTC | #4
On Tue, Dec 18, 2018 at 02:58:33PM +0100, Vlastimil Babka wrote:
> On 12/18/18 2:51 PM, Mel Gorman wrote:
> > On Tue, Dec 18, 2018 at 01:36:42PM +0100, Vlastimil Babka wrote:
> >> On 12/15/18 12:03 AM, Mel Gorman wrote:
> >>> When pageblocks get fragmented, watermarks are artificially boosted so that pages
> >>> are reclaimed to avoid further fragmentation events. However, compaction
> >>> is often either fragmentation-neutral or moving movable pages away from
> >>> unmovable/reclaimable pages. As the actual watermarks are preserved,
> >>> allow compaction to ignore the boost factor.
> >>
> >> Right, I should have realized that when reviewing the boost patch. I
> >> think it would be useful to do the same change in
> >> __compaction_suitable() as well. Compaction has its own "gap".
> >>
> > 
> > That gap is somewhat static though so I'm a bit more wary of it. However,
> 
> Well, watermark boost is dynamic, but based on allocations stealing from
> other migratetypes, not reflecting compaction chances of success.
> 

True.

> > the check in __isolate_free_page looks too aggressive. We isolate in
> > units of COMPACT_CLUSTER_MAX yet the watermark check there is based on
> > the allocation request. That means for THP we check whether 512 pages
> > can be allocated when only somewhere between 1 and 32 are needed for that
> > compaction cycle to complete. Adjusting that might be more appropriate?
> 
> AFAIU the code in __isolate_free_page() reflects that if there's less
> than a 512-page gap of free pages, we might form a high-order page for THP but
> won't be able to allocate it afterwards due to the watermark.

Yeah but it used to be a lot more important when watermark checking for
high-orders was very different. Now, if the watermark is met for order-0
and a large enough free page is available, the allocation succeeds, so
it's a lot less relevant than it used to be. kswapd will still run in
the background for order-0 if necessary so a heavy watermark check there
doesn't really help.
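
For reference, a simplified sketch of what that reworked high-order watermark
check boils down to; this is an assumed simplification of the current
behaviour, not the literal __zone_watermark_ok(): the size gap is only
enforced at order-0, and a high-order request then merely needs some free
page of sufficient order to exist.

/*
 * Simplified sketch (kernel-style, not literal source): the watermark
 * gap is only enforced against the order-0 free page count; beyond
 * that, a high-order request just needs any free page of at least the
 * requested order in a usable migratetype.
 */
static bool watermark_ok_sketch(struct zone *z, unsigned int order,
				unsigned long mark)
{
	unsigned long free_pages = zone_page_state(z, NR_FREE_PAGES);
	unsigned int o, mt;

	/* The only hard size check: enough order-0 headroom above the mark. */
	if (free_pages <= mark)
		return false;
	if (order == 0)
		return true;

	/* High-order: succeed as soon as a big enough free page exists. */
	for (o = order; o < MAX_ORDER; o++) {
		struct free_area *area = &z->free_area[o];

		for (mt = 0; mt < MIGRATE_PCPTYPES; mt++) {
			if (!list_empty(&area->free_list[mt]))
				return true;
		}
	}
	return false;
}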

Patch

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 80535cd55a92..c7b80e62bfd9 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -3043,7 +3043,7 @@  int __isolate_free_page(struct page *page, unsigned int order)
 		 * watermark, because we already know our high-order page
 		 * exists.
 		 */
-		watermark = min_wmark_pages(zone) + (1UL << order);
+		watermark = zone->_watermark[WMARK_MIN] + (1UL << order);
 		if (!zone_watermark_ok(zone, 0, watermark, 0, ALLOC_CMA))
 			return 0;