
[v4,3/3] mm/compaction: optimize >0 order folio compaction with free page split.

Message ID 20240212163510.859822-4-zi.yan@sent.com (mailing list archive)
State New
Series [v4,1/3] mm/compaction: enable compacting >0 order folios.

Commit Message

Zi Yan Feb. 12, 2024, 4:35 p.m. UTC
From: Zi Yan <ziy@nvidia.com>

During the migration phase of memory compaction, free pages are placed
in an array of page lists based on their order.  But the desired free
page order (i.e., the order of a source page) might not always be
present, leading to migration failures and premature compaction
termination.  Split a high-order free page when the source migration
page has a lower order to increase the migration success rate.

Note: merging free pages when a migration fails and a lower-order free
page is returned via compaction_free() is possible, but it would be too
much work.  Since the free pages are not buddy pages, it is hard to
identify them with the existing PFN-based page merging algorithm.

Signed-off-by: Zi Yan <ziy@nvidia.com>
Reviewed-by: Baolin Wang <baolin.wang@linux.alibaba.com>
Tested-by: Baolin Wang <baolin.wang@linux.alibaba.com>
Cc: Adam Manzanares <a.manzanares@samsung.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: Huang Ying <ying.huang@intel.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Kemeng Shi <shikemeng@huaweicloud.com>
Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: Luis Chamberlain <mcgrof@kernel.org>
Cc: Matthew Wilcox (Oracle) <willy@infradead.org>
Cc: Mel Gorman <mgorman@techsingularity.net>
Cc: Ryan Roberts <ryan.roberts@arm.com>
Cc: Vishal Moola (Oracle) <vishal.moola@gmail.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Yin Fengwei <fengwei.yin@intel.com>
Cc: Yu Zhao <yuzhao@google.com>
---
 mm/compaction.c | 36 +++++++++++++++++++++++++++++++-----
 1 file changed, 31 insertions(+), 5 deletions(-)
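
The strategy is easy to model outside the kernel.  Below is a minimal
userspace sketch, assuming per-order counters (free_blocks) stand in
for the cc->freepages page lists and a hypothetical alloc_with_split()
stands in for compaction_alloc(); it illustrates the search-then-split
idea only, not the kernel code itself:

        #include <stdio.h>

        #define NR_ORDERS 4

        /* One counter per order; a stand-in for the cc->freepages[] lists. */
        static int free_blocks[NR_ORDERS];

        /*
         * Take one block of exactly `order`, splitting a larger block when
         * no block of that order is available.  Returns 0 on success, -1
         * when every list is empty (the kernel would call
         * isolate_freepages() and retry once).
         */
        static int alloc_with_split(int order)
        {
                int start_order;

                /* Find the smallest non-empty order >= the requested one. */
                for (start_order = order; start_order < NR_ORDERS; start_order++)
                        if (free_blocks[start_order] > 0)
                                break;

                if (start_order == NR_ORDERS)
                        return -1;

                free_blocks[start_order]--;

                /* Split down, returning each upper half to a lower list. */
                while (start_order > order) {
                        start_order--;
                        free_blocks[start_order]++;
                }
                return 0;
        }

        int main(void)
        {
                int o;

                free_blocks[3] = 1;     /* one order-3 block to start with */
                printf("alloc order 0: %s\n",
                       alloc_with_split(0) ? "failed" : "ok");
                for (o = 0; o < NR_ORDERS; o++)
                        printf("order %d blocks left: %d\n", o, free_blocks[o]);
                return 0;
        }

Starting from a single order-3 block, one order-0 allocation leaves one
block each on the order-2, order-1, and order-0 lists, which is exactly
the split pattern the while loop in compaction_alloc() produces with
real pages.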

Comments

Yu Zhao Feb. 12, 2024, 6:27 p.m. UTC | #1
On Mon, Feb 12, 2024 at 9:35 AM Zi Yan <zi.yan@sent.com> wrote:
>
> [...]
> +       if (start_order == NR_PAGE_ORDERS) {
> +               if (!has_isolated_pages) {
> +                       isolate_freepages(cc);
> +                       has_isolated_pages = true;
> +                       goto again;
> +               } else
>                         return NULL;

Nit: remove the "else" above, or just:

        if (has_isolated_pages)
                return NULL;
        isolate_freepages(cc);
        has_isolated_pages = true;
        goto again;
Zi Yan Feb. 12, 2024, 6:29 p.m. UTC | #2
On 12 Feb 2024, at 13:27, Yu Zhao wrote:

> On Mon, Feb 12, 2024 at 9:35 AM Zi Yan <zi.yan@sent.com> wrote:
>> [...]
>> +       if (start_order == NR_PAGE_ORDERS) {
>> +               if (!has_isolated_pages) {
>> +                       isolate_freepages(cc);
>> +                       has_isolated_pages = true;
>> +                       goto again;
>> +               } else
>>                         return NULL;
>
> Nit: remove the "else" above, or just:
>
>         if (has_isolated_pages)
>                 return NULL;
>         isolate_freepages(cc);
>         has_isolated_pages = true;
>         goto again;

Will do. Thanks.

--
Best Regards,
Yan, Zi

Patch

diff --git a/mm/compaction.c b/mm/compaction.c
index d0a05a621b67..25908e36b97c 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -1832,15 +1832,41 @@  static struct folio *compaction_alloc(struct folio *src, unsigned long data)
 	struct compact_control *cc = (struct compact_control *)data;
 	struct folio *dst;
 	int order = folio_order(src);
+	bool has_isolated_pages = false;
+	int start_order;
+	struct page *freepage;
+	unsigned long size;
+
+again:
+	for (start_order = order; start_order < NR_PAGE_ORDERS; start_order++)
+		if (!list_empty(&cc->freepages[start_order]))
+			break;
 
-	if (list_empty(&cc->freepages[order])) {
-		isolate_freepages(cc);
-		if (list_empty(&cc->freepages[order]))
+	/* no free pages in the list */
+	if (start_order == NR_PAGE_ORDERS) {
+		if (!has_isolated_pages) {
+			isolate_freepages(cc);
+			has_isolated_pages = true;
+			goto again;
+		} else
 			return NULL;
 	}
 
-	dst = list_first_entry(&cc->freepages[order], struct folio, lru);
-	list_del(&dst->lru);
+	freepage = list_first_entry(&cc->freepages[start_order], struct page,
+				lru);
+	size = 1 << start_order;
+
+	list_del(&freepage->lru);
+
+	while (start_order > order) {
+		start_order--;
+		size >>= 1;
+
+		list_add(&freepage[size].lru, &cc->freepages[start_order]);
+		set_page_private(&freepage[size], start_order);
+	}
+	dst = (struct folio *)freepage;
+
 	post_alloc_hook(&dst->page, order, __GFP_MOVABLE);
 	if (order)
 		prep_compound_page(&dst->page, order);
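
To make the split loop concrete, here is a worked trace with
hypothetical values: a request for order = 0 served from the order-2
free list, so start_order = 2 and size = 4, with freepage covering
pages 0..3:

        iteration 1: start_order = 1, size = 2
                freepage[2] (pages 2..3) is added to cc->freepages[1],
                and set_page_private() records order 1 on it
        iteration 2: start_order = 0, size = 1
                freepage[1] (page 1) is added to cc->freepages[0],
                and set_page_private() records order 0 on it
        loop exits (start_order == order): freepage[0] becomes dst

Each iteration halves size and returns the upper half to the next lower
list, so the loop runs start_order - order times and the block handed
to post_alloc_hook() is exactly 1 << order pages; set_page_private()
keeps each remainder's order, matching how the rest of the series
tracks the order of pages sitting on the freepages lists.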