Message ID | 20230802093741.2333325-1-shikemeng@huaweicloud.com (mailing list archive)
---|---
Series | Fixes and cleanups to compaction
On 02.08.23 11:37, Kemeng Shi wrote:
> skip_offline_sections_reverse will return the last pfn in found online
> section. Then we set block_start_pfn to start of page block which
> contains the last pfn in section. Then we continue, move one page
> block forward and ignore the last page block in the online section.
> Make block_start_pfn point to first page block after online section to fix
> this:
> 1. make skip_offline_sections_reverse return end pfn of online section,
> i.e. pfn of page block after online section.
> 2. assign block_start_pfn with next_pfn.
>
> Fixes: f63224525309 ("mm: compaction: skip the memory hole rapidly when isolating free pages")
> Signed-off-by: Kemeng Shi <shikemeng@huaweicloud.com>
> ---
>  mm/compaction.c | 10 +++++++---
>  1 file changed, 7 insertions(+), 3 deletions(-)
>
> diff --git a/mm/compaction.c b/mm/compaction.c
> index cd23da4d2a5b..a8cea916df9d 100644
> --- a/mm/compaction.c
> +++ b/mm/compaction.c
> @@ -250,6 +250,11 @@ static unsigned long skip_offline_sections(unsigned long start_pfn)
>  	return 0;
>  }
>
> +/*
> + * If the PFN falls into an offline section, return the end PFN of the
> + * next online section in reverse. If the PFN falls into an online section
> + * or if there is no next online section in reverse, return 0.
> + */
>  static unsigned long skip_offline_sections_reverse(unsigned long start_pfn)
>  {
>  	unsigned long start_nr = pfn_to_section_nr(start_pfn);
> @@ -259,7 +264,7 @@ static unsigned long skip_offline_sections_reverse(unsigned long start_pfn)
>
>  	while (start_nr-- > 0) {
>  		if (online_section_nr(start_nr))
> -			return section_nr_to_pfn(start_nr) + PAGES_PER_SECTION - 1;
> +			return section_nr_to_pfn(start_nr) + PAGES_PER_SECTION;
>  	}
>
>  	return 0;
> @@ -1668,8 +1673,7 @@ static void isolate_freepages(struct compact_control *cc)
>
>  		next_pfn = skip_offline_sections_reverse(block_start_pfn);
>  		if (next_pfn)
> -			block_start_pfn = max(pageblock_start_pfn(next_pfn),
> -					      low_pfn);
> +			block_start_pfn = max(next_pfn, low_pfn);
>
>  		continue;
>  	}

Acked-by: David Hildenbrand <david@redhat.com>
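To make the off-by-one concrete, here is a minimal user-space sketch of how the reverse free scan resumes before and after the patch; the section and pageblock sizes (128 MiB sections, 2 MiB pageblocks, 4 KiB pages) and the simplified pageblock_start_pfn() helper are illustrative assumptions, not code taken from mm/compaction.c:

/*
 * Minimal sketch of the off-by-one the patch fixes. The sizes below
 * and the simplified helpers are illustrative assumptions, not kernel code.
 */
#include <stdio.h>

#define PAGES_PER_SECTION	32768UL
#define pageblock_nr_pages	512UL
#define pageblock_start_pfn(pfn)	((pfn) & ~(pageblock_nr_pages - 1))

int main(void)
{
	/* Assume the found online section spans PFNs [32768, 65536). */
	unsigned long section_start = 32768UL;

	/* Old behaviour: return the last PFN of the online section ... */
	unsigned long old_ret = section_start + PAGES_PER_SECTION - 1;	/* 65535 */
	unsigned long old_block = pageblock_start_pfn(old_ret);	/* 65024 */
	/* ... then isolate_freepages() steps back one pageblock before scanning. */
	printf("old: resume scan at PFN %lu, pageblock [65024, 65536) is skipped\n",
	       old_block - pageblock_nr_pages);				/* 64512 */

	/* Fixed behaviour: return the end PFN, one past the section ... */
	unsigned long new_ret = section_start + PAGES_PER_SECTION;	/* 65536 */
	/* ... so the same step back lands on the section's last pageblock. */
	printf("new: resume scan at PFN %lu, the section's last pageblock\n",
	       new_ret - pageblock_nr_pages);				/* 65024 */
	return 0;
}

With the end PFN returned, the for-loop's own block_start_pfn -= pageblock_nr_pages step in isolate_freepages() lands exactly on the section's last pageblock instead of stepping past it.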
On 8/2/2023 5:37 PM, Kemeng Shi wrote:
> skip_offline_sections_reverse will return the last pfn in found online
> section. Then we set block_start_pfn to start of page block which
> contains the last pfn in section. Then we continue, move one page
> block forward and ignore the last page block in the online section.
> Make block_start_pfn point to first page block after online section to fix
> this:
> 1. make skip_offline_sections_reverse return end pfn of online section,
> i.e. pfn of page block after online section.
> 2. assign block_start_pfn with next_pfn.
>
> Fixes: f63224525309 ("mm: compaction: skip the memory hole rapidly when isolating free pages")

The changes look good to me. But the commit id is not stable, since it is
not merged into mm-stable branch yet. Not sure how to handle this patch,
squash it into the original patch? Andrew, what do you prefer?

> Signed-off-by: Kemeng Shi <shikemeng@huaweicloud.com>
> ---
>  mm/compaction.c | 10 +++++++---
>  1 file changed, 7 insertions(+), 3 deletions(-)
>
> diff --git a/mm/compaction.c b/mm/compaction.c
> index cd23da4d2a5b..a8cea916df9d 100644
> --- a/mm/compaction.c
> +++ b/mm/compaction.c
> @@ -250,6 +250,11 @@ static unsigned long skip_offline_sections(unsigned long start_pfn)
>  	return 0;
>  }
>
> +/*
> + * If the PFN falls into an offline section, return the end PFN of the
> + * next online section in reverse. If the PFN falls into an online section
> + * or if there is no next online section in reverse, return 0.
> + */
>  static unsigned long skip_offline_sections_reverse(unsigned long start_pfn)
>  {
>  	unsigned long start_nr = pfn_to_section_nr(start_pfn);
> @@ -259,7 +264,7 @@ static unsigned long skip_offline_sections_reverse(unsigned long start_pfn)
>
>  	while (start_nr-- > 0) {
>  		if (online_section_nr(start_nr))
> -			return section_nr_to_pfn(start_nr) + PAGES_PER_SECTION - 1;
> +			return section_nr_to_pfn(start_nr) + PAGES_PER_SECTION;
>  	}
>
>  	return 0;
> @@ -1668,8 +1673,7 @@ static void isolate_freepages(struct compact_control *cc)
>
>  		next_pfn = skip_offline_sections_reverse(block_start_pfn);
>  		if (next_pfn)
> -			block_start_pfn = max(pageblock_start_pfn(next_pfn),
> -					      low_pfn);
> +			block_start_pfn = max(next_pfn, low_pfn);
>
>  		continue;
>  	}