Message ID: 20230826153617.4019189-1-shikemeng@huaweicloud.com (mailing list archive)
Series: Fixes and cleanups to compaction
On 8/26/2023 11:36 PM, Kemeng Shi wrote:
> In strict mode, we should return 0 if there is any hole in the
> pageblock. If we successfully isolate pages at the beginning of a
> pageblock and then hit a bogus compound_order outside the pageblock on
> the next page, we will abort the search loop with blockpfn > end_pfn.
> Although we will limit blockpfn to end_pfn, we will treat it as a
> successful isolation in strict mode as blockpfn is not < end_pfn, and
> return the partially isolated pages. Then isolate_freepages_range may
> succeed unexpectedly with a hole in the isolated range.
>
> Fixes: 9fcd6d2e052ee ("mm, compaction: skip compound pages by order in free scanner")
> Signed-off-by: Kemeng Shi <shikemeng@huaweicloud.com>

LGTM. Reviewed-by: Baolin Wang <baolin.wang@linux.alibaba.com>

> ---
>  mm/compaction.c | 11 +++++++++--
>  1 file changed, 9 insertions(+), 2 deletions(-)
>
> diff --git a/mm/compaction.c b/mm/compaction.c
> index a40550a33aee..b4d03c9ffe7c 100644
> --- a/mm/compaction.c
> +++ b/mm/compaction.c
> @@ -631,6 +631,14 @@ static unsigned long isolate_freepages_block(struct compact_control *cc,
>  				page += (1UL << order) - 1;
>  				nr_scanned += (1UL << order) - 1;
>  			}
> +			/*
> +			 * There is a tiny chance that we have read bogus
> +			 * compound_order(), so be careful to not go outside
> +			 * of the pageblock.
> +			 */
> +			if (unlikely(blockpfn >= end_pfn))
> +				blockpfn = end_pfn - 1;
> +
>  			goto isolate_fail;
>  		}
>
> @@ -678,8 +686,7 @@ static unsigned long isolate_freepages_block(struct compact_control *cc,
>  		spin_unlock_irqrestore(&cc->zone->lock, flags);
>
>  	/*
> -	 * There is a tiny chance that we have read bogus compound_order(),
> -	 * so be careful to not go outside of the pageblock.
> +	 * Be careful to not go outside of the pageblock.
>  	 */
>  	if (unlikely(blockpfn > end_pfn))
>  		blockpfn = end_pfn;
On 8/26/2023 11:36 PM, Kemeng Shi wrote:
> We do proactive compaction with order == -1 via
> 1. /proc/sys/vm/compact_memory
> 2. /sys/devices/system/node/nodex/compact
> 3. /proc/sys/vm/compaction_proactiveness
> Add the missed situations in which order == -1.
>
> Signed-off-by: Kemeng Shi <shikemeng@huaweicloud.com>

Reviewed-by: Baolin Wang <baolin.wang@linux.alibaba.com>

> ---
>  mm/compaction.c | 6 ++++--
>  1 file changed, 4 insertions(+), 2 deletions(-)
>
> diff --git a/mm/compaction.c b/mm/compaction.c
> index 89a1b627bc89..00b7bba6c72e 100644
> --- a/mm/compaction.c
> +++ b/mm/compaction.c
> @@ -2061,8 +2061,10 @@ static isolate_migrate_t isolate_migratepages(struct compact_control *cc)
>  }
>
>  /*
> - * order == -1 is expected when compacting via
> - * /proc/sys/vm/compact_memory
> + * order == -1 is expected when compacting proactively via
> + * 1. /proc/sys/vm/compact_memory
> + * 2. /sys/devices/system/node/nodex/compact
> + * 3. /proc/sys/vm/compaction_proactiveness
>  */
>  static inline bool is_via_compact_memory(int order)
>  {
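For context, is_via_compact_memory() is a one-line helper in mm/compaction.c; the quoted diff truncates before the function body, so here is a sketch of how the helper reads with the patch applied (the body shown is the helper's existing implementation, not something this patch changes):

	/*
	 * order == -1 is expected when compacting proactively via
	 * 1. /proc/sys/vm/compact_memory
	 * 2. /sys/devices/system/node/nodex/compact
	 * 3. /proc/sys/vm/compaction_proactiveness
	 */
	static inline bool is_via_compact_memory(int order)
	{
		return order == -1;
	}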
On Sat, Aug 26, 2023 at 11:36:13PM +0800, Kemeng Shi wrote:
> In strict mode, we should return 0 if there is any hole in the
> pageblock. If we successfully isolate pages at the beginning of a
> pageblock and then hit a bogus compound_order outside the pageblock on
> the next page, we will abort the search loop with blockpfn > end_pfn.
> Although we will limit blockpfn to end_pfn, we will treat it as a
> successful isolation in strict mode as blockpfn is not < end_pfn, and
> return the partially isolated pages. Then isolate_freepages_range may
> succeed unexpectedly with a hole in the isolated range.
>
> Fixes: 9fcd6d2e052ee ("mm, compaction: skip compound pages by order in free scanner")
> Signed-off-by: Kemeng Shi <shikemeng@huaweicloud.com>

Modify the (likely(order <= MAX_ORDER)) block to avoid ever updating
blockpfn past the end of the pageblock. Then remove the second redundant
check.
On Sat, Aug 26, 2023 at 11:36:14PM +0800, Kemeng Shi wrote:
> We call isolate_freepages_block in strict mode; continuous pages in the
> pageblock will be isolated if isolate_freepages_block succeeds. Then
> pfn + isolated will point to the start of the next pageblock to scan,
> no matter how many pageblocks are isolated in isolate_freepages_block.
> Use pfn + isolated as the start of the next pageblock to scan to
> simplify the iteration.
>
> pfn + isolated always points to the start of the next pageblock
> because:
>
> In case the isolated buddy page has order higher than the pageblock:
> 1. a page in a buddy page is aligned with its order
> 2. the order of the page is higher than the pageblock order
> Then the page is aligned with the pageblock order, so the pfn of the
> page and the isolated page count are both aligned to the pageblock
> order, and pfn + isolated is pageblock-order aligned.
>
> In case the isolated buddy page has order lower than the pageblock:
> A buddy page with order N contains two order N - 1 pages as follows:
> |        order N        |
> |order N - 1|order N - 1|
> So buddy pages with order N - 1 will never cross the boundary of order
> N. Similarly, buddy pages with order N - 2 will never cross the
> boundary of order N - 1, and so on. Then any pages with order less than
> the pageblock order will never cross the boundary of the pageblock.
>
> Signed-off-by: Kemeng Shi <shikemeng@huaweicloud.com>
> Reviewed-by: Baolin Wang <baolin.wang@linux.alibaba.com>

While I don't think the patch is wrong, I also don't think it
meaningfully simplifies the code or optimises enough to be justified.
Even though a branch is eliminated, the whole path is not cheap.
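The alignment argument in the quoted message can be sanity-checked with a few lines of standalone C. This is an illustration only, not kernel code; PAGEBLOCK_ORDER here is a hypothetical constant standing in for the kernel's pageblock_order (9 on x86-64 with 4K pages):

	#include <assert.h>
	#include <stdio.h>

	#define PAGEBLOCK_ORDER 9UL
	#define PAGEBLOCK_MASK ((1UL << PAGEBLOCK_ORDER) - 1)

	int main(void)
	{
		/*
		 * Case 1: one isolated buddy page of order > pageblock order.
		 * Buddy pfns are aligned to their order, so pfn and the
		 * isolated count are both multiples of the pageblock size.
		 */
		unsigned long pfn = 10UL << 10;     /* order-10 aligned start pfn */
		unsigned long isolated = 1UL << 10; /* one order-10 buddy page */
		assert(((pfn + isolated) & PAGEBLOCK_MASK) == 0);

		/*
		 * Case 2: buddies of order < pageblock order never straddle a
		 * pageblock boundary, so a fully isolated pageblock sums to
		 * exactly one pageblock worth of pages.
		 */
		pfn = 7UL << PAGEBLOCK_ORDER;                    /* pageblock-aligned */
		isolated = (1UL << 8) + (1UL << 7) + (1UL << 7); /* 256 + 128 + 128 */
		assert(((pfn + isolated) & PAGEBLOCK_MASK) == 0);

		printf("pfn + isolated lands on a pageblock boundary in both cases\n");
		return 0;
	}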
On Sat, Aug 26, 2023 at 11:36:15PM +0800, Kemeng Shi wrote:
> We have a compact_blockskip_flush check in __reset_isolation_suitable;
> just remove the repeated check before __reset_isolation_suitable in
> reset_isolation_suitable.
>
> Signed-off-by: Kemeng Shi <shikemeng@huaweicloud.com>
> Reviewed-by: Baolin Wang <baolin.wang@linux.alibaba.com>

The comment should move to __reset_isolation_suitable but otherwise

Acked-by: Mel Gorman <mgorman@techsingularity.net>

As a complete aside, the reset_isolation_suitable and
__reset_isolation_suitable were badly named because locking is not
involved but it's meaningless churn to fix it.
On Sat, Aug 26, 2023 at 11:36:16PM +0800, Kemeng Shi wrote:
> We do proactive compaction with order == -1 via
> 1. /proc/sys/vm/compact_memory
> 2. /sys/devices/system/node/nodex/compact
> 3. /proc/sys/vm/compaction_proactiveness
> Add the missed situations in which order == -1.
>
> Signed-off-by: Kemeng Shi <shikemeng@huaweicloud.com>

Acked-by: Mel Gorman <mgorman@techsingularity.net>
On 8/29/2023 5:39 PM, Mel Gorman wrote:
> On Sat, Aug 26, 2023 at 11:36:13PM +0800, Kemeng Shi wrote:
>> In strict mode, we should return 0 if there is any hole in the
>> pageblock. If we successfully isolate pages at the beginning of a
>> pageblock and then hit a bogus compound_order outside the pageblock on
>> the next page, we will abort the search loop with blockpfn > end_pfn.
>> Although we will limit blockpfn to end_pfn, we will treat it as a
>> successful isolation in strict mode as blockpfn is not < end_pfn, and
>> return the partially isolated pages. Then isolate_freepages_range may
>> succeed unexpectedly with a hole in the isolated range.
>>
>> Fixes: 9fcd6d2e052ee ("mm, compaction: skip compound pages by order in free scanner")
>> Signed-off-by: Kemeng Shi <shikemeng@huaweicloud.com>
>
> Modify the (likely(order <= MAX_ORDER)) block to avoid ever updating
> blockpfn past the end of the pageblock. Then remove the second
> redundant check.
>
Sure, I will improve this in the next version.
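A minimal sketch of the direction Mel suggests, assuming the bound check folds into the compound-page branch so blockpfn can never pass end_pfn and the post-loop clamp can be deleted. This is an assumption about what the next version might look like, not the posted patch:

	if (PageCompound(page)) {
		const unsigned int order = compound_order(page);

		/*
		 * Only skip ahead when the whole compound page fits in
		 * the pageblock; a bogus compound_order() can then never
		 * move blockpfn past end_pfn, making the post-loop
		 * "blockpfn > end_pfn" fixup unnecessary.
		 */
		if (likely(order <= MAX_ORDER) &&
		    blockpfn + (1UL << order) <= end_pfn) {
			blockpfn += (1UL << order) - 1;
			page += (1UL << order) - 1;
			nr_scanned += (1UL << order) - 1;
		}
		goto isolate_fail;
	}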
On 8/29/2023 11:01 PM, Mel Gorman wrote:
> On Sat, Aug 26, 2023 at 11:36:14PM +0800, Kemeng Shi wrote:
>> We call isolate_freepages_block in strict mode; continuous pages in
>> the pageblock will be isolated if isolate_freepages_block succeeds.
>> Then pfn + isolated will point to the start of the next pageblock to
>> scan, no matter how many pageblocks are isolated in
>> isolate_freepages_block. Use pfn + isolated as the start of the next
>> pageblock to scan to simplify the iteration.
>>
>> pfn + isolated always points to the start of the next pageblock
>> because:
>>
>> In case the isolated buddy page has order higher than the pageblock:
>> 1. a page in a buddy page is aligned with its order
>> 2. the order of the page is higher than the pageblock order
>> Then the page is aligned with the pageblock order, so the pfn of the
>> page and the isolated page count are both aligned to the pageblock
>> order, and pfn + isolated is pageblock-order aligned.
>>
>> In case the isolated buddy page has order lower than the pageblock:
>> A buddy page with order N contains two order N - 1 pages as follows:
>> |        order N        |
>> |order N - 1|order N - 1|
>> So buddy pages with order N - 1 will never cross the boundary of
>> order N. Similarly, buddy pages with order N - 2 will never cross the
>> boundary of order N - 1, and so on. Then any pages with order less
>> than the pageblock order will never cross the boundary of the
>> pageblock.
>>
>> Signed-off-by: Kemeng Shi <shikemeng@huaweicloud.com>
>> Reviewed-by: Baolin Wang <baolin.wang@linux.alibaba.com>
>
> While I don't think the patch is wrong, I also don't think it
> meaningfully simplifies the code or optimises enough to be justified.
> Even though a branch is eliminated, the whole path is not cheap.
>
OK, I will drop this in the next version if you insist.
On 8/29/2023 11:05 PM, Mel Gorman wrote:
> On Sat, Aug 26, 2023 at 11:36:15PM +0800, Kemeng Shi wrote:
>> We have a compact_blockskip_flush check in __reset_isolation_suitable;
>> just remove the repeated check before __reset_isolation_suitable in
>> reset_isolation_suitable.
>>
>> Signed-off-by: Kemeng Shi <shikemeng@huaweicloud.com>
>> Reviewed-by: Baolin Wang <baolin.wang@linux.alibaba.com>
>
> The comment should move to __reset_isolation_suitable but otherwise
>
Thanks for the review, I will move the comment in the next version.

> Acked-by: Mel Gorman <mgorman@techsingularity.net>
>
> As a complete aside, the reset_isolation_suitable and
> __reset_isolation_suitable were badly named because locking is not
> involved but it's meaningless churn to fix it.
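For reference, a sketch of how the caller could read once the duplicate check is dropped and the comment moves into __reset_isolation_suitable(), as agreed above; this is an assumed shape for the next version, not the posted patch:

	void reset_isolation_suitable(pg_data_t *pgdat)
	{
		int zoneid;

		for (zoneid = 0; zoneid < MAX_NR_ZONES; zoneid++) {
			struct zone *zone = &pgdat->node_zones[zoneid];

			if (!populated_zone(zone))
				continue;

			/*
			 * __reset_isolation_suitable() bails out on its own
			 * unless a full compaction recently set
			 * compact_blockskip_flush, so no check is needed here.
			 */
			__reset_isolation_suitable(zone);
		}
	}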