Message ID | 1601445649-22163-1-git-send-email-anshuman.khandual@arm.com (mailing list archive) |
---|---|
State | New, archived |
Headers | show |
Series | [RFC,V2] mm/vmstat: Add events for HugeTLB migration | expand |
On Wed, Sep 30, 2020 at 11:30:49AM +0530, Anshuman Khandual wrote:
> -		is_thp = PageTransHuge(page) && !PageHuge(page);
> -		nr_subpages = thp_nr_pages(page);
> +		is_thp = false;
> +		is_hugetlb = false;
> +		if (PageTransHuge(page)) {
> +			if (PageHuge(page))
> +				is_hugetlb = true;
> +			else
> +				is_thp = true;
> +		}

Since PageHuge only returns true for hugetlb pages, I think the following
is simpler?

	if (PageHuge(page))
		is_hugetlb = true;
	else if (PageTransHuge(page))
		is_thp = true;

Besides that, it looks good to me:

Reviewed-by: Oscar Salvador <osalvador@suse.de>
On 09/30/2020 01:16 PM, Oscar Salvador wrote:
> On Wed, Sep 30, 2020 at 11:30:49AM +0530, Anshuman Khandual wrote:
>> -		is_thp = PageTransHuge(page) && !PageHuge(page);
>> -		nr_subpages = thp_nr_pages(page);
>> +		is_thp = false;
>> +		is_hugetlb = false;
>> +		if (PageTransHuge(page)) {
>> +			if (PageHuge(page))
>> +				is_hugetlb = true;
>> +			else
>> +				is_thp = true;
>> +		}
>
> Since PageHuge only returns true for hugetlb pages, I think the following
> is simpler?
>
> 	if (PageHuge(page))
> 		is_hugetlb = true;
> 	else if (PageTransHuge(page))
> 		is_thp = true;

Right, it would be simpler. But as Mike had mentioned before, the PageHuge()
check is more expensive than PageTransHuge(). This proposal just tries not
to call PageHuge() unless the page first passes the PageTransHuge() check,
saving some potential CPU cycles on normal pages.

> Besides that, it looks good to me:
>
> Reviewed-by: Oscar Salvador <osalvador@suse.de>
On Wed, Sep 30, 2020 at 03:32:10PM +0530, Anshuman Khandual wrote:
> Right, it would be simpler. But as Mike had mentioned before, the PageHuge()
> check is more expensive than PageTransHuge(). This proposal just tries not
> to call PageHuge() unless the page first passes the PageTransHuge() check,
> saving some potential CPU cycles on normal pages.

Ah, I remember now. I missed that, sorry.
On 9/29/20 11:00 PM, Anshuman Khandual wrote:
> Add following new vmstat events which will track HugeTLB page migration.
>
> 1. HUGETLB_MIGRATION_SUCCESS
> 2. HUGETLB_MIGRATION_FAILURE
>
> It follows the existing semantics to accommodate HugeTLB subpages in total
> page migration statistics. While here, this updates current trace event
> 'mm_migrate_pages' to accommodate now available HugeTLB based statistics.
>
> Cc: Daniel Jordan <daniel.m.jordan@oracle.com>
> Cc: Zi Yan <ziy@nvidia.com>
> Cc: John Hubbard <jhubbard@nvidia.com>
> Cc: Mike Kravetz <mike.kravetz@oracle.com>
> Cc: Oscar Salvador <osalvador@suse.de>
> Cc: Andrew Morton <akpm@linux-foundation.org>
> Cc: linux-mm@kvack.org
> Cc: linux-kernel@vger.kernel.org
> Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>

Thanks,

Reviewed-by: Mike Kravetz <mike.kravetz@oracle.com>
On Wed 30-09-20 11:30:49, Anshuman Khandual wrote:
> Add following new vmstat events which will track HugeTLB page migration.
>
> 1. HUGETLB_MIGRATION_SUCCESS
> 2. HUGETLB_MIGRATION_FAILURE
>
> It follows the existing semantics to accommodate HugeTLB subpages in total
> page migration statistics. While here, this updates current trace event
> 'mm_migrate_pages' to accommodate now available HugeTLB based statistics.

What is the actual usecase? And how do you deal with the complexity
introduced by many different hugetlb page sizes? Really, what is the
point of having such a detailed view on hugetlb migration?

> Cc: Daniel Jordan <daniel.m.jordan@oracle.com>
> Cc: Zi Yan <ziy@nvidia.com>
> Cc: John Hubbard <jhubbard@nvidia.com>
> Cc: Mike Kravetz <mike.kravetz@oracle.com>
> Cc: Oscar Salvador <osalvador@suse.de>
> Cc: Andrew Morton <akpm@linux-foundation.org>
> Cc: linux-mm@kvack.org
> Cc: linux-kernel@vger.kernel.org
> Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
> ---
> Applies on linux-next and v5.9-rc7.
> [...]
On 10/02/2020 05:34 PM, Michal Hocko wrote:
> On Wed 30-09-20 11:30:49, Anshuman Khandual wrote:
>> Add following new vmstat events which will track HugeTLB page migration.
>>
>> 1. HUGETLB_MIGRATION_SUCCESS
>> 2. HUGETLB_MIGRATION_FAILURE
>>
>> It follows the existing semantics to accommodate HugeTLB subpages in total
>> page migration statistics. While here, this updates current trace event
>> 'mm_migrate_pages' to accommodate now available HugeTLB based statistics.
>
> What is the actual usecase? And how do you deal with the complexity
> introduced by many different hugetlb page sizes? Really, what is the
> point of having such a detailed view on hugetlb migration?

It helps differentiate various page migration events, i.e. normal, THP and
HugeTLB, and gives us a more reliable and accurate measurement. The current
stats PGMIGRATE_SUCCESS and PGMIGRATE_FAIL are misleading, as they count
both normal and HugeTLB pages as single entities, which is not accurate.

After this change, PGMIGRATE_SUCCESS and PGMIGRATE_FAIL will contain page
migration statistics in terms of normal pages, irrespective of whether any
previous migrations until that point involved normal pages, THP or HugeTLB
(any size) pages. At the very least, this fixes the existing misleading
stats in PGMIGRATE_SUCCESS and PGMIGRATE_FAIL.

Besides, it helps us understand HugeTLB migrations in more detail. Even
though HugeTLB pages can be of various sizes on a given platform, the new
stats HUGETLB_MIGRATION_SUCCESS and HUGETLB_MIGRATION_FAILURE give enough
overall insight into HugeTLB migration events. Though these new stats
accumulate HugeTLB migration successes and failures irrespective of size,
they will still be helpful, as HugeTLB use is user driven and the user
should be able to decipher these accumulated stats. But this is a
limitation, as it might be difficult to determine available HugeTLB sizes
at compile time.
On Mon 05-10-20 07:59:12, Anshuman Khandual wrote:
> On 10/02/2020 05:34 PM, Michal Hocko wrote:
>> On Wed 30-09-20 11:30:49, Anshuman Khandual wrote:
>>> [...]
>>
>> What is the actual usecase? And how do you deal with the complexity
>> introduced by many different hugetlb page sizes? Really, what is the
>> point of having such a detailed view on hugetlb migration?
>
> It helps differentiate various page migration events, i.e. normal, THP and
> HugeTLB, and gives us a more reliable and accurate measurement. The current
> stats PGMIGRATE_SUCCESS and PGMIGRATE_FAIL are misleading, as they count
> both normal and HugeTLB pages as single entities, which is not accurate.

Yes, this is true. But why does it really matter? Do you have a specific
example?

> After this change, PGMIGRATE_SUCCESS and PGMIGRATE_FAIL will contain page
> migration statistics in terms of normal pages, irrespective of whether any
> previous migrations until that point involved normal pages, THP or HugeTLB
> (any size) pages. At the very least, this fixes the existing misleading
> stats in PGMIGRATE_SUCCESS and PGMIGRATE_FAIL.
>
> Besides, it helps us understand HugeTLB migrations in more detail. Even
> though HugeTLB pages can be of various sizes on a given platform, the new
> stats HUGETLB_MIGRATION_SUCCESS and HUGETLB_MIGRATION_FAILURE give enough
> overall insight into HugeTLB migration events.

While true, this all is way too vague to add yet another imprecise counter.
On 10/05/2020 11:35 AM, Michal Hocko wrote:
> On Mon 05-10-20 07:59:12, Anshuman Khandual wrote:
>> On 10/02/2020 05:34 PM, Michal Hocko wrote:
>>> [...]
>>
>> It helps differentiate various page migration events, i.e. normal, THP and
>> HugeTLB, and gives us a more reliable and accurate measurement. The current
>> stats PGMIGRATE_SUCCESS and PGMIGRATE_FAIL are misleading, as they count
>> both normal and HugeTLB pages as single entities, which is not accurate.
>
> Yes, this is true. But why does it really matter? Do you have a specific
> example?

An example which demonstrates that the mixing and misrepresentation in these
stats creates some problem? Well, we could just create one scenario via an
application with different VMA types and trigger some migrations. But the
fact remains that these stats are inaccurate and misleading, which is very
clear and apparent.

>> Besides, it helps us understand HugeTLB migrations in more detail. Even
>> though HugeTLB pages can be of various sizes on a given platform, the new
>> stats HUGETLB_MIGRATION_SUCCESS and HUGETLB_MIGRATION_FAILURE give enough
>> overall insight into HugeTLB migration events.
>
> While true, this all is way too vague to add yet another imprecise counter.

Given that the user knows about all the HugeTLB mappings it has got, these
counters are not really vague and could easily be related back. Moreover,
this change completes the migration stats restructuring which was started
with the addition of the THP counters. Otherwise it remains incomplete.
On Tue 06-10-20 08:26:35, Anshuman Khandual wrote:
> On 10/05/2020 11:35 AM, Michal Hocko wrote:
>> [...]
>>
>> Yes, this is true. But why does it really matter? Do you have a specific
>> example?
>
> An example which demonstrates that the mixing and misrepresentation in these
> stats creates some problem? Well, we could just create one scenario via an
> application with different VMA types and trigger some migrations. But the
> fact remains that these stats are inaccurate and misleading, which is very
> clear and apparent.

This doesn't sound like a proper justification to me.

>> While true, this all is way too vague to add yet another imprecise counter.
>
> Given that the user knows about all the HugeTLB mappings it has got, these
> counters are not really vague and could easily be related back.

How can you tell a difference between different hugetlb sizes?

> Moreover, this change completes the migration stats restructuring which was
> started with the addition of the THP counters. Otherwise it remains
> incomplete.

Our counters will always be incomplete. Please do realize they are a mere
aid rather than a comprehensive toolset to identify the system behavior in
full detail. We have way too many counters, and these particular ones are
not adding much IMHO. At least not from your description, which sounds to
me like "Yeah, why not do that; although there is no strong reason, well,
except that THP already has it, so let's do it for hugetlb as well". I
really do not want to deal with yet another report in a few years that
somebody wants to distinguish hugetlb migration of different sizes. Really,
is there any _real_ practical use for these other than "nice, let's just
have it"?
diff --git a/include/linux/vm_event_item.h b/include/linux/vm_event_item.h
index 18e75974d4e3..d1ddad835c19 100644
--- a/include/linux/vm_event_item.h
+++ b/include/linux/vm_event_item.h
@@ -60,6 +60,8 @@ enum vm_event_item { PGPGIN, PGPGOUT, PSWPIN, PSWPOUT,
 		THP_MIGRATION_SUCCESS,
 		THP_MIGRATION_FAIL,
 		THP_MIGRATION_SPLIT,
+		HUGETLB_MIGRATION_SUCCESS,
+		HUGETLB_MIGRATION_FAIL,
 #endif
 #ifdef CONFIG_COMPACTION
 		COMPACTMIGRATE_SCANNED, COMPACTFREE_SCANNED,
diff --git a/include/trace/events/migrate.h b/include/trace/events/migrate.h
index 4d434398d64d..f8ffb8aece48 100644
--- a/include/trace/events/migrate.h
+++ b/include/trace/events/migrate.h
@@ -47,10 +47,11 @@ TRACE_EVENT(mm_migrate_pages,

 	TP_PROTO(unsigned long succeeded, unsigned long failed,
 		 unsigned long thp_succeeded, unsigned long thp_failed,
-		 unsigned long thp_split, enum migrate_mode mode, int reason),
+		 unsigned long thp_split, unsigned long hugetlb_succeeded,
+		 unsigned long hugetlb_failed, enum migrate_mode mode, int reason),

 	TP_ARGS(succeeded, failed, thp_succeeded, thp_failed,
-		thp_split, mode, reason),
+		thp_split, hugetlb_succeeded, hugetlb_failed, mode, reason),

 	TP_STRUCT__entry(
 		__field(	unsigned long,		succeeded)
@@ -58,6 +59,8 @@ TRACE_EVENT(mm_migrate_pages,
 		__field(	unsigned long,		thp_succeeded)
 		__field(	unsigned long,		thp_failed)
 		__field(	unsigned long,		thp_split)
+		__field(	unsigned long,		hugetlb_succeeded)
+		__field(	unsigned long,		hugetlb_failed)
 		__field(	enum migrate_mode,	mode)
 		__field(	int,			reason)
 	),
@@ -68,16 +71,20 @@ TRACE_EVENT(mm_migrate_pages,
 		__entry->thp_succeeded	= thp_succeeded;
 		__entry->thp_failed	= thp_failed;
 		__entry->thp_split	= thp_split;
+		__entry->hugetlb_succeeded = hugetlb_succeeded;
+		__entry->hugetlb_failed	= hugetlb_failed;
 		__entry->mode		= mode;
 		__entry->reason		= reason;
 	),

-	TP_printk("nr_succeeded=%lu nr_failed=%lu nr_thp_succeeded=%lu nr_thp_failed=%lu nr_thp_split=%lu mode=%s reason=%s",
+	TP_printk("nr_succeeded=%lu nr_failed=%lu nr_thp_succeeded=%lu nr_thp_failed=%lu nr_thp_split=%lu nr_hugetlb_succeeded=%lu nr_hugetlb_failed=%lu mode=%s reason=%s",
 		__entry->succeeded,
 		__entry->failed,
 		__entry->thp_succeeded,
 		__entry->thp_failed,
 		__entry->thp_split,
+		__entry->hugetlb_succeeded,
+		__entry->hugetlb_failed,
 		__print_symbolic(__entry->mode, MIGRATE_MODE),
 		__print_symbolic(__entry->reason, MIGRATE_REASON))
 );
diff --git a/mm/migrate.c b/mm/migrate.c
index 5ca5842df5db..0aac9d39778c 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -1415,13 +1415,17 @@ int migrate_pages(struct list_head *from, new_page_t get_new_page,
 {
 	int retry = 1;
 	int thp_retry = 1;
+	int hugetlb_retry = 1;
 	int nr_failed = 0;
 	int nr_succeeded = 0;
 	int nr_thp_succeeded = 0;
 	int nr_thp_failed = 0;
 	int nr_thp_split = 0;
+	int nr_hugetlb_succeeded = 0;
+	int nr_hugetlb_failed = 0;
 	int pass = 0;
 	bool is_thp = false;
+	bool is_hugetlb = false;
 	struct page *page;
 	struct page *page2;
 	int swapwrite = current->flags & PF_SWAPWRITE;
@@ -1430,9 +1434,10 @@ int migrate_pages(struct list_head *from, new_page_t get_new_page,
 	if (!swapwrite)
 		current->flags |= PF_SWAPWRITE;

-	for (pass = 0; pass < 10 && (retry || thp_retry); pass++) {
+	for (pass = 0; pass < 10 && (retry || thp_retry || hugetlb_retry); pass++) {
 		retry = 0;
 		thp_retry = 0;
+		hugetlb_retry = 0;

 		list_for_each_entry_safe(page, page2, from, lru) {
retry:
@@ -1441,11 +1446,19 @@ int migrate_pages(struct list_head *from, new_page_t get_new_page,
 			 * Capture required information that might get lost
 			 * during migration.
 			 */
-			is_thp = PageTransHuge(page) && !PageHuge(page);
-			nr_subpages = thp_nr_pages(page);
+			is_thp = false;
+			is_hugetlb = false;
+			if (PageTransHuge(page)) {
+				if (PageHuge(page))
+					is_hugetlb = true;
+				else
+					is_thp = true;
+			}
+			nr_subpages = compound_nr(page);
+
 			cond_resched();

-			if (PageHuge(page))
+			if (is_hugetlb)
 				rc = unmap_and_move_huge_page(get_new_page,
 						put_new_page, private, page,
 						pass > 2, mode, reason);
@@ -1481,6 +1494,11 @@ int migrate_pages(struct list_head *from, new_page_t get_new_page,
 					nr_failed += nr_subpages;
 					goto out;
 				}
+				if (is_hugetlb) {
+					nr_hugetlb_failed++;
+					nr_failed += nr_subpages;
+					goto out;
+				}
 				nr_failed++;
 				goto out;
 			case -EAGAIN:
@@ -1488,6 +1506,10 @@ int migrate_pages(struct list_head *from, new_page_t get_new_page,
 					thp_retry++;
 					break;
 				}
+				if (is_hugetlb) {
+					hugetlb_retry++;
+					break;
+				}
 				retry++;
 				break;
 			case MIGRATEPAGE_SUCCESS:
@@ -1496,6 +1518,11 @@ int migrate_pages(struct list_head *from, new_page_t get_new_page,
 					nr_succeeded += nr_subpages;
 					break;
 				}
+				if (is_hugetlb) {
+					nr_hugetlb_succeeded++;
+					nr_succeeded += nr_subpages;
+					break;
+				}
 				nr_succeeded++;
 				break;
 			default:
@@ -1510,13 +1537,19 @@ int migrate_pages(struct list_head *from, new_page_t get_new_page,
 					nr_failed += nr_subpages;
 					break;
 				}
+				if (is_hugetlb) {
+					nr_hugetlb_failed++;
+					nr_failed += nr_subpages;
+					break;
+				}
 				nr_failed++;
 				break;
 			}
 		}
 	}
-	nr_failed += retry + thp_retry;
+	nr_failed += retry + thp_retry + hugetlb_retry;
 	nr_thp_failed += thp_retry;
+	nr_hugetlb_failed += hugetlb_retry;
 	rc = nr_failed;
out:
 	count_vm_events(PGMIGRATE_SUCCESS, nr_succeeded);
@@ -1524,8 +1557,11 @@ int migrate_pages(struct list_head *from, new_page_t get_new_page,
 	count_vm_events(THP_MIGRATION_SUCCESS, nr_thp_succeeded);
 	count_vm_events(THP_MIGRATION_FAIL, nr_thp_failed);
 	count_vm_events(THP_MIGRATION_SPLIT, nr_thp_split);
+	count_vm_events(HUGETLB_MIGRATION_SUCCESS, nr_hugetlb_succeeded);
+	count_vm_events(HUGETLB_MIGRATION_FAIL, nr_hugetlb_failed);
 	trace_mm_migrate_pages(nr_succeeded, nr_failed, nr_thp_succeeded,
-			       nr_thp_failed, nr_thp_split, mode, reason);
+			       nr_thp_failed, nr_thp_split, nr_hugetlb_succeeded,
+			       nr_hugetlb_failed, mode, reason);

 	if (!swapwrite)
 		current->flags &= ~PF_SWAPWRITE;
diff --git a/mm/vmstat.c b/mm/vmstat.c
index 79e5cd0abd0e..12fd35ba135f 100644
--- a/mm/vmstat.c
+++ b/mm/vmstat.c
@@ -1286,6 +1286,8 @@ const char * const vmstat_text[] = {
 	"thp_migration_success",
 	"thp_migration_fail",
 	"thp_migration_split",
+	"hugetlb_migration_success",
+	"hugetlb_migration_fail",
 #endif
 #ifdef CONFIG_COMPACTION
 	"compact_migrate_scanned",
Add following new vmstat events which will track HugeTLB page migration.

1. HUGETLB_MIGRATION_SUCCESS
2. HUGETLB_MIGRATION_FAILURE

It follows the existing semantics to accommodate HugeTLB subpages in total
page migration statistics. While here, this updates current trace event
'mm_migrate_pages' to accommodate now available HugeTLB based statistics.

Cc: Daniel Jordan <daniel.m.jordan@oracle.com>
Cc: Zi Yan <ziy@nvidia.com>
Cc: John Hubbard <jhubbard@nvidia.com>
Cc: Mike Kravetz <mike.kravetz@oracle.com>
Cc: Oscar Salvador <osalvador@suse.de>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
---
Applies on linux-next and v5.9-rc7.

Changes in RFC V2:

- Added the missing hugetlb_retry in the loop per Oscar
- Changed HugeTLB and THP detection sequence per Mike
- Changed nr_subpages fetch to compound_nr() instead per Mike

Changes in RFC V1: (https://patchwork.kernel.org/patch/11799395/)

 include/linux/vm_event_item.h  |  2 ++
 include/trace/events/migrate.h | 13 ++++++---
 mm/migrate.c                   | 48 +++++++++++++++++++++++++++++-----
 mm/vmstat.c                    |  2 ++
 4 files changed, 56 insertions(+), 9 deletions(-)