| Message ID | 20220215145111.27082-3-mgorman@techsingularity.net (mailing list archive) |
|---|---|
| State | New |
| Series | Follow-up on high-order PCP caching |
On 2/15/22 15:51, Mel Gorman wrote:
> free_pcppages_bulk() frees pages in a round-robin fashion. Originally,
> this was dealing only with migratetypes but storing high-order pages
> means that there can be many more empty lists that are uselessly
> checked. Track the minimum and maximum active pindex to reduce the
> search space.
> 
> Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
> ---
>  mm/page_alloc.c | 13 +++++++++++--
>  1 file changed, 11 insertions(+), 2 deletions(-)
> 
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 08de32cfd9bb..c5110fdeb115 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -1450,6 +1450,8 @@ static void free_pcppages_bulk(struct zone *zone, int count,
>  					struct per_cpu_pages *pcp)
>  {
>  	int pindex = 0;
> +	int min_pindex = 0;
> +	int max_pindex = NR_PCP_LISTS - 1;
>  	int batch_free = 0;
>  	int nr_freed = 0;
>  	unsigned int order;
> @@ -1478,10 +1480,17 @@ static void free_pcppages_bulk(struct zone *zone, int count,
>  		if (++pindex == NR_PCP_LISTS)

Hmm, so in the very first iteration at this point pindex is already 1. This
looks odd even before the patch, as the order 0 MIGRATE_UNMOVABLE list is
only processed after all the higher orders?

>  			pindex = 0;

Also shouldn't this wrap-around check use min_pindex/max_pindex instead
of NR_PCP_LISTS and 0?

>  		list = &pcp->lists[pindex];
> -	} while (list_empty(list));
> +		if (!list_empty(list))
> +			break;
> +
> +		if (pindex == max_pindex)
> +			max_pindex--;
> +		if (pindex == min_pindex)

So with pindex 1 and min_pindex == 0 this will not trigger until
(eventually) the first pindex wrap-around, which seems suboptimal. But I can
see the later patches change things substantially anyway, so it may be moot...

> +			min_pindex++;
> +	} while (1);
> 
>  	/* This is the only non-empty list. Free them all. */
> -	if (batch_free == NR_PCP_LISTS)
> +	if (batch_free >= max_pindex - min_pindex)
>  		batch_free = count;
> 
>  	order = pindex_to_order(pindex);
On Wed, Feb 16, 2022 at 01:02:01PM +0100, Vlastimil Babka wrote:
> On 2/15/22 15:51, Mel Gorman wrote:
> > free_pcppages_bulk() frees pages in a round-robin fashion. Originally,
> > this was dealing only with migratetypes but storing high-order pages
> > means that there can be many more empty lists that are uselessly
> > checked. Track the minimum and maximum active pindex to reduce the
> > search space.
> > 
> > Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
> > ---
> >  mm/page_alloc.c | 13 +++++++++++--
> >  1 file changed, 11 insertions(+), 2 deletions(-)
> > 
> > diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> > index 08de32cfd9bb..c5110fdeb115 100644
> > --- a/mm/page_alloc.c
> > +++ b/mm/page_alloc.c
> > @@ -1450,6 +1450,8 @@ static void free_pcppages_bulk(struct zone *zone, int count,
> >  					struct per_cpu_pages *pcp)
> >  {
> >  	int pindex = 0;
> > +	int min_pindex = 0;
> > +	int max_pindex = NR_PCP_LISTS - 1;
> >  	int batch_free = 0;
> >  	int nr_freed = 0;
> >  	unsigned int order;
> > @@ -1478,10 +1480,17 @@ static void free_pcppages_bulk(struct zone *zone, int count,
> >  		if (++pindex == NR_PCP_LISTS)
> 
> Hmm, so in the very first iteration at this point pindex is already 1. This
> looks odd even before the patch, as the order 0 MIGRATE_UNMOVABLE list is
> only processed after all the higher orders?
> 

Yes, and this was the behaviour both before and after. I don't recall why.
It might have been to preserve UNMOVABLE pages, but after the series is
finished the reasoning is weak. I'll add a specific check.

> >  			pindex = 0;
> 
> Also shouldn't this wrap-around check use min_pindex/max_pindex instead
> of NR_PCP_LISTS and 0?
> 

Yes, it should. It's a rebasing error from an earlier prototype that I
missed. I'll fix it.
> >  		list = &pcp->lists[pindex];
> > -	} while (list_empty(list));
> > +		if (!list_empty(list))
> > +			break;
> > +
> > +		if (pindex == max_pindex)
> > +			max_pindex--;
> > +		if (pindex == min_pindex)
> 
> So with pindex 1 and min_pindex == 0 this will not trigger until
> (eventually) the first pindex wrap-around, which seems suboptimal. But I can
> see the later patches change things substantially anyway, so it may be moot...
> 

It could potentially be more optimal, but at the cost of complexity that I
wanted to avoid in this path as much as possible. Initialising
min_pindex == pindex could result in an infinite loop if the lower lists
need to be cleared.
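[Editor's note] The wrap-around change discussed in this exchange can be sketched in user space. The following is an illustrative model only: next_nonempty() and the plain boolean array are hypothetical stand-ins for the kernel's per-CPU lists, not real kernel interfaces. It shows a scan that wraps within the active [min_pindex, max_pindex] window and shrinks the window when an edge list is found empty:

```c
#include <assert.h>
#include <stdbool.h>

#define NR_PCP_LISTS 12

/*
 * User-space sketch of the pindex scan with the wrap-around honouring
 * min_pindex/max_pindex, as suggested in the review.  Returns the next
 * non-empty list index after "pindex", or -1 if every list is empty.
 */
static int next_nonempty(const bool *nonempty, int pindex,
			 int *min_pindex, int *max_pindex)
{
	do {
		/* Wrap within the active window, not over [0, NR_PCP_LISTS). */
		if (++pindex > *max_pindex)
			pindex = *min_pindex;
		if (nonempty[pindex])
			return pindex;
		/* Shrink the window when an edge list turns out to be empty. */
		if (pindex == *max_pindex)
			(*max_pindex)--;
		if (pindex == *min_pindex)
			(*min_pindex)++;
	} while (*min_pindex <= *max_pindex);

	return -1;	/* every list is empty */
}
```

Because the window only shrinks past lists that were observed empty, a non-empty list is never excluded, and the min_pindex <= max_pindex condition guarantees the loop terminates even when all lists are empty.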
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 08de32cfd9bb..c5110fdeb115 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1450,6 +1450,8 @@ static void free_pcppages_bulk(struct zone *zone, int count,
 					struct per_cpu_pages *pcp)
 {
 	int pindex = 0;
+	int min_pindex = 0;
+	int max_pindex = NR_PCP_LISTS - 1;
 	int batch_free = 0;
 	int nr_freed = 0;
 	unsigned int order;
@@ -1478,10 +1480,17 @@ static void free_pcppages_bulk(struct zone *zone, int count,
 		if (++pindex == NR_PCP_LISTS)
 			pindex = 0;
 		list = &pcp->lists[pindex];
-	} while (list_empty(list));
+		if (!list_empty(list))
+			break;
+
+		if (pindex == max_pindex)
+			max_pindex--;
+		if (pindex == min_pindex)
+			min_pindex++;
+	} while (1);
 
 	/* This is the only non-empty list. Free them all. */
-	if (batch_free == NR_PCP_LISTS)
+	if (batch_free >= max_pindex - min_pindex)
 		batch_free = count;
 
 	order = pindex_to_order(pindex);
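[Editor's note] To see what the window tracking buys once the wrap-around also honours it, here is a toy user-space model. drain_probes() and the integer counters are hypothetical stand-ins for the kernel's per-CPU lists; this is an illustration of the search-space reduction, not kernel code. It drains one "page" per search round-robin and counts how many lists each search inspects, with and without the [min_pindex, max_pindex] narrowing:

```c
#include <assert.h>

#define NR_PCP_LISTS 12

/*
 * Toy model of the round-robin drain: free one "page" per search and
 * count how many lists are inspected.  With narrow != 0 the scan keeps
 * the [min_pindex, max_pindex] window and also wraps within it (the fix
 * discussed in the review) instead of over the whole array.
 */
static int drain_probes(int counts[NR_PCP_LISTS], int narrow)
{
	int pindex = 0, probes = 0, remaining = 0;
	int min_pindex = 0, max_pindex = NR_PCP_LISTS - 1;
	int i;

	for (i = 0; i < NR_PCP_LISTS; i++)
		remaining += counts[i];

	while (remaining > 0) {
		/* Find the next non-empty list, as the do/while in the patch does. */
		do {
			if (narrow) {
				if (++pindex > max_pindex)
					pindex = min_pindex;
			} else {
				if (++pindex == NR_PCP_LISTS)
					pindex = 0;
			}
			probes++;
			if (counts[pindex])
				break;
			if (narrow) {
				if (pindex == max_pindex)
					max_pindex--;
				if (pindex == min_pindex)
					min_pindex++;
			}
		} while (1);
		counts[pindex]--;	/* "free" one page from this list */
		remaining--;
	}
	return probes;
}
```

With a single busy list at pindex 5 holding three pages, the full scan inspects 29 lists in this model while the narrowed scan inspects 23, because after the first wrap the window collapses around the busy list and later searches skip the empty edges.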
free_pcppages_bulk() frees pages in a round-robin fashion. Originally,
this was dealing only with migratetypes but storing high-order pages
means that there can be many more empty lists that are uselessly
checked. Track the minimum and maximum active pindex to reduce the
search space.

Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
---
 mm/page_alloc.c | 13 +++++++++++--
 1 file changed, 11 insertions(+), 2 deletions(-)