mm/vmstat: Add events for PMD based THP migration without split

Message ID 1590118444-21601-1-git-send-email-anshuman.khandual@arm.com (mailing list archive)
State New, archived
Series mm/vmstat: Add events for PMD based THP migration without split

Commit Message

Anshuman Khandual May 22, 2020, 3:34 a.m. UTC
This adds the following two new VM events which will help in validating PMD
based THP migration without split. Statistics reported through these events
will help in performance debugging.

1. THP_PMD_MIGRATION_SUCCESS
2. THP_PMD_MIGRATION_FAILURE

Cc: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
Cc: Zi Yan <ziy@nvidia.com>
Cc: John Hubbard <jhubbard@nvidia.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org
[hughd: fixed oops on NULL newpage]
Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
---
Changes in V1:

- Changed function name to thp_pmd_migration_success() per John
- Folded in a fix (https://patchwork.kernel.org/patch/11563009/) from Hugh

Changes in RFC V2: (https://patchwork.kernel.org/patch/11554861/)

- Decoupled and renamed VM events from their implementation per Zi and John
- Added THP_PMD_MIGRATION_FAILURE VM event upon allocation failure and split

Changes in RFC V1: (https://patchwork.kernel.org/patch/11542055/)

 include/linux/vm_event_item.h |  4 ++++
 mm/migrate.c                  | 23 +++++++++++++++++++++++
 mm/vmstat.c                   |  4 ++++
 3 files changed, 31 insertions(+)
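
On kernels with CONFIG_ARCH_ENABLE_THP_MIGRATION, the two new counters surface
as lines in /proc/vmstat. A minimal userspace sketch for sampling them (the
read_vmstat() helper is illustrative, not an existing tool; the field names are
the strings this patch adds to vmstat_text):

/* Illustrative only: sample the counters added by this patch. */
#include <stdio.h>
#include <string.h>

static long read_vmstat(const char *name)
{
	char key[64];
	long value = -1;	/* -1: counter not present */
	FILE *fp = fopen("/proc/vmstat", "r");

	if (!fp)
		return -1;
	while (fscanf(fp, "%63s %ld", key, &value) == 2) {
		if (!strcmp(key, name))
			break;
		value = -1;
	}
	fclose(fp);
	return value;
}

int main(void)
{
	printf("thp_pmd_migration_success: %ld\n",
	       read_vmstat("thp_pmd_migration_success"));
	printf("thp_pmd_migration_failure: %ld\n",
	       read_vmstat("thp_pmd_migration_failure"));
	return 0;
}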

Comments

Daniel Jordan June 1, 2020, 4:57 p.m. UTC | #1
Hi Anshuman,

On Fri, May 22, 2020 at 09:04:04AM +0530, Anshuman Khandual wrote:
> This adds the following two new VM events which will help in validating PMD
> based THP migration without split. Statistics reported through these events
> will help in performance debugging.
> 
> 1. THP_PMD_MIGRATION_SUCCESS
> 2. THP_PMD_MIGRATION_FAILURE

The names suggest a binary event similar to the existing
pgmigrate_success/fail, but FAILURE only tracks one kind of migration error,
and then only when the thp is successfully split, so shouldn't it be called
SPLIT instead?
John Hubbard June 2, 2020, 3:20 a.m. UTC | #2
On 2020-06-01 09:57, Daniel Jordan wrote:
> Hi Anshuman,
> 
> On Fri, May 22, 2020 at 09:04:04AM +0530, Anshuman Khandual wrote:
>> This adds the following two new VM events which will help in validating PMD
>> based THP migration without split. Statistics reported through these events
>> will help in performance debugging.
>>
>> 1. THP_PMD_MIGRATION_SUCCESS
>> 2. THP_PMD_MIGRATION_FAILURE
> 
> The names suggest a binary event similar to the existing
> pgmigrate_success/fail, but FAILURE only tracks one kind of migration error,
> and then only when the thp is successfully split, so shouldn't it be called
> SPLIT instead?
> 

So the full description of the situation, which we're trying to compress into
a shorter name, is "THP pmd migration failure, due to successfully splitting
the THP". From that, the beginning part is the real point here, while the last
part is less important. In other words, the users of these events are people
who are trying to quantify THP migrations, and these events are particularly
relevant for that. The "THP migration failed" is more important here than
the reason that it failed. Or so I believe so far.

So I still think that the names are really quite good, but your point is
also important: maybe this patch should also be tracking other causes
of THP PMD migration failure, in order to get a truer accounting of the
situation.

thanks,
Anshuman Khandual June 2, 2020, 4:20 a.m. UTC | #3
On 06/02/2020 08:50 AM, John Hubbard wrote:
> On 2020-06-01 09:57, Daniel Jordan wrote:
>> Hi Anshuman,
>>
>> On Fri, May 22, 2020 at 09:04:04AM +0530, Anshuman Khandual wrote:
>>> This adds the following two new VM events which will help in validating PMD
>>> based THP migration without split. Statistics reported through these events
>>> will help in performance debugging.
>>>
>>> 1. THP_PMD_MIGRATION_SUCCESS
>>> 2. THP_PMD_MIGRATION_FAILURE
>>
>> The names suggest a binary event similar to the existing
>> pgmigrate_success/fail, but FAILURE only tracks one kind of migration error,
>> and then only when the thp is successfully split, so shouldn't it be called
>> SPLIT instead?
>>
> 
> So the full description of the situation, which we're trying to compress into
> a shorter name, is "THP pmd migration failure, due to successfully splitting
> the THP". From that, the beginning part is the real point here, while the last
> part is less important. In other words, the users of these events are people
> who are trying to quantify THP migrations, and these events are particularly
> relevant for that. The "THP migration failed" is more important here than
> the reason that it failed. Or so I believe so far.

Absolutely, these events really help in quantifying THP migration successes
or their failures that involve splitting.

> 
> So I still think that the names are really quite good, but your point is

Agreed.

> also important: maybe this patch should also be tracking other causes
> of THP PMD migration failure, in order to get a truer accounting of the
> situation.

Are there any other failure reasons that are specific to THP migration?
Otherwise, adding stats about generic migration failure reasons will just blur
the overall understanding of THP migration successes and the failure cases
that result in splitting.
John Hubbard June 2, 2020, 4:48 a.m. UTC | #4
On 2020-06-01 21:20, Anshuman Khandual wrote:
...
>> also important: maybe this patch should also be tracking other causes
>> of THP PMD migration failure, in order to get a truer accounting of the
>> situation.

I hope one of the experts here can weigh in on that...

> Are there any other failure reasons that are specific to THP migration?
> Otherwise, adding stats about generic migration failure reasons will just blur
> the overall understanding of THP migration successes and the failure cases
> that result in splitting.
> 

Thinking about that: we do have PGMIGRATE_SUCCESS and PGMIGRATE_FAIL, so I
suppose comparing those numbers with the new THP_PMD_MIGRATION_SUCCESS and
THP_PMD_MIGRATION_FAILURE events should cover it.

However, the fact that this is under discussion hints at the need for a
bit of documentation help. What do you think about adding some notes about
all of this to, say, Documentation/vm/page_migration.rst ?

thanks,
Anshuman Khandual June 2, 2020, 5:30 a.m. UTC | #5
On 06/02/2020 10:18 AM, John Hubbard wrote:
> On 2020-06-01 21:20, Anshuman Khandual wrote:
> ...
>>> also important: maybe this patch should also be tracking other causes
>>> of THP PMD migration failure, in order to get a truer accounting of the
>>> situation.
> 
> I hope one of the experts here can weigh in on that...
> 
>> Are there any other failure reasons that are specific to THP migration?
>> Otherwise, adding stats about generic migration failure reasons will just blur
>> the overall understanding of THP migration successes and the failure cases
>> that result in splitting.
>>
> 
> Thinking about that: we do have PGMIGRATE_SUCCESS and PGMIGRATE_FAIL, so I
> suppose comparing those numbers with the new THP_PMD_MIGRATION_SUCCESS and
> THP_PMD_MIGRATION_FAILURE events should cover it.

That is right.

> 
> However, the fact that this is under discussion hints at the need for a
> bit of documentation help. What do you think about adding some notes about
> all of this to, say, Documentation/vm/page_migration.rst ?

Sure, but probably a bit later. I am also planning to add a couple of trace
events for THP migration, so I will update the documentation for both the
VM stat and trace events together.
Daniel Jordan June 2, 2020, 2:52 p.m. UTC | #6
On Mon, Jun 01, 2020 at 09:48:09PM -0700, John Hubbard wrote:
> However, the fact that this is under discussion hints at the need for a
> bit of documentation help. What do you think about adding some notes about
> all of this to, say, Documentation/vm/page_migration.rst ?

Yes, that would be good.  I understand the intent better now but still think
the 'failure' event could be misinterpreted as outright failure instead of
splitting followed by successfully moving the constituent pages.

It might help to clarify in the changelog as well.
Matthew Wilcox June 2, 2020, 3:01 p.m. UTC | #7
On Fri, May 22, 2020 at 09:04:04AM +0530, Anshuman Khandual wrote:
> This adds the following two new VM events which will help in validating PMD
> based THP migration without split. Statistics reported through these events
> will help in performance debugging.
> 
> 1. THP_PMD_MIGRATION_SUCCESS
> 2. THP_PMD_MIGRATION_FAILURE

There's nothing actually PMD specific about these events, is there?
If we have a THP of a non-PMD size, you'd want that reported through the
same statistic, wouldn't you?
Anshuman Khandual June 3, 2020, 1:26 a.m. UTC | #8
On 06/02/2020 08:31 PM, Matthew Wilcox wrote:
> On Fri, May 22, 2020 at 09:04:04AM +0530, Anshuman Khandual wrote:
>> This adds the following two new VM events which will help in validating PMD
>> based THP migration without split. Statistics reported through these events
>> will help in performance debugging.
>>
>> 1. THP_PMD_MIGRATION_SUCCESS
>> 2. THP_PMD_MIGRATION_FAILURE
> 
> There's nothing actually PMD specific about these events, is there?
> If we have a THP of a non-PMD size, you'd want that reported through the
> same statistic, wouldn't you?

Yes, there is nothing PMD specific here and we would use the same statistics
for non-PMD size THP migration (if any) as well. But is THP migration really
supported for non-PMD sizes? CONFIG_ARCH_ENABLE_THP_MIGRATION depends upon
CONFIG_TRANSPARENT_HUGEPAGE without either confirming or ruling out possible
PUD level support. Fair enough, I will drop the PMD from the events and their
functions.
Matthew Wilcox June 3, 2020, 2:57 a.m. UTC | #9
On Wed, Jun 03, 2020 at 06:56:57AM +0530, Anshuman Khandual wrote:
> On 06/02/2020 08:31 PM, Matthew Wilcox wrote:
> > On Fri, May 22, 2020 at 09:04:04AM +0530, Anshuman Khandual wrote:
> >> This adds the following two new VM events which will help in validating PMD
> >> based THP migration without split. Statistics reported through these events
> >> will help in performance debugging.
> >>
> >> 1. THP_PMD_MIGRATION_SUCCESS
> >> 2. THP_PMD_MIGRATION_FAILURE
> > 
> > There's nothing actually PMD specific about these events, is there?
> > If we have a THP of a non-PMD size, you'd want that reported through the
> > same statistic, wouldn't you?
> 
> Yes, there is nothing PMD specific here and we would use the same statistics
> for non-PMD size THP migration (if any) as well. But is THP migration really
> supported for non-PMD sizes? CONFIG_ARCH_ENABLE_THP_MIGRATION depends upon
> CONFIG_TRANSPARENT_HUGEPAGE without either confirming or ruling out possible
> PUD level support. Fair enough, I will drop the PMD from the events and their
> functions.

I guess you haven't read my large pages patchset?
Anshuman Khandual June 3, 2020, 4:36 a.m. UTC | #10
On 06/02/2020 08:22 PM, Daniel Jordan wrote:
> On Mon, Jun 01, 2020 at 09:48:09PM -0700, John Hubbard wrote:
>> However, the fact that this is under discussion hints at the need for a
>> bit of documentation help. What do you think about adding some notes about
>> all of this to, say, Documentation/vm/page_migration.rst ?
> 
> Yes, that would be good.  I understand the intent better now but still think
> the 'failure' event could be misinterpreted as outright failure instead of
> splitting followed by successfully moving the constituent pages.

Does this look okay and sufficient?

--- a/Documentation/vm/page_migration.rst
+++ b/Documentation/vm/page_migration.rst
@@ -253,5 +253,20 @@ which are function pointers of struct address_space_operations.
      PG_isolated is alias with PG_reclaim flag so driver shouldn't use the flag
      for own purpose.
 
+Quantifying Migration
+=====================
+Following events can be used to quantify page migration.
+
+- PGMIGRATE_SUCCESS
+- PGMIGRATE_FAIL
+- THP_MIGRATION_SUCCESS
+- THP_MIGRATION_FAILURE
+
+THP_MIGRATION_FAILURE in particular represents an event when a THP could not be
+migrated as a single entity following an allocation failure and ended up getting
+split into constituent normal pages before being retried. This event, along with
+PGMIGRATE_SUCCESS and PGMIGRATE_FAIL will help in quantifying and analyzing THP
+migration events including both success and failure cases.
+


> 
> It might help to clarify in the changelog as well.
> 

Sure, will update the commit message accordingly.
Anshuman Khandual June 3, 2020, 4:58 a.m. UTC | #11
On 06/03/2020 08:27 AM, Matthew Wilcox wrote:
> On Wed, Jun 03, 2020 at 06:56:57AM +0530, Anshuman Khandual wrote:
>> On 06/02/2020 08:31 PM, Matthew Wilcox wrote:
>>> On Fri, May 22, 2020 at 09:04:04AM +0530, Anshuman Khandual wrote:
>>>> This adds the following two new VM events which will help in validating PMD
>>>> based THP migration without split. Statistics reported through these events
>>>> will help in performance debugging.
>>>>
>>>> 1. THP_PMD_MIGRATION_SUCCESS
>>>> 2. THP_PMD_MIGRATION_FAILURE
>>>
>>> There's nothing actually PMD specific about these events, is there?
>>> If we have a THP of a non-PMD size, you'd want that reported through the
>>> same statistic, wouldn't you?
>>
>> Yes, there is nothing PMD specific here and we would use the same statistics
>> for non-PMD size THP migration (if any) as well. But is THP migration really
>> supported for non-PMD sizes? CONFIG_ARCH_ENABLE_THP_MIGRATION depends upon
>> CONFIG_TRANSPARENT_HUGEPAGE without either confirming or ruling out possible
>> PUD level support. Fair enough, I will drop the PMD from the events and their
>> functions.
> 
> I guess you haven't read my large pages patchset?

I believe you are referring to this "[PATCH v5 00/39] Large pages in the page cache"
(https://lkml.org/lkml/2020/5/28/1755). Unfortunately, I have not been following
the series. But is there something else in particular that needs to be taken care
of as well?
Matthew Wilcox June 3, 2020, 11:09 a.m. UTC | #12
On Wed, Jun 03, 2020 at 10:28:41AM +0530, Anshuman Khandual wrote:
> On 06/03/2020 08:27 AM, Matthew Wilcox wrote:
> > On Wed, Jun 03, 2020 at 06:56:57AM +0530, Anshuman Khandual wrote:
> >> On 06/02/2020 08:31 PM, Matthew Wilcox wrote:
> >>> On Fri, May 22, 2020 at 09:04:04AM +0530, Anshuman Khandual wrote:
> >>>> This adds the following two new VM events which will help in validating PMD
> >>>> based THP migration without split. Statistics reported through these events
> >>>> will help in performance debugging.
> >>>>
> >>>> 1. THP_PMD_MIGRATION_SUCCESS
> >>>> 2. THP_PMD_MIGRATION_FAILURE
> >>>
> >>> There's nothing actually PMD specific about these events, is there?
> >>> If we have a THP of a non-PMD size, you'd want that reported through the
> >>> same statistic, wouldn't you?
> >>
> >> Yes, there is nothing PMD specific here and we would use the same statistics
> >> for non-PMD size THP migration (if any) as well. But is THP migration really
> >> supported for non-PMD sizes? CONFIG_ARCH_ENABLE_THP_MIGRATION depends upon
> >> CONFIG_TRANSPARENT_HUGEPAGE without either confirming or ruling out possible
> >> PUD level support. Fair enough, I will drop the PMD from the events and their
> >> functions.
> > 
> > I guess you haven't read my large pages patchset?
> 
> I believe you are referring to this "[PATCH v5 00/39] Large pages in the page cache"
> (https://lkml.org/lkml/2020/5/28/1755). Unfortunately, I have not been following
> the series. But is there something else in particular that needs to be taken care
> of as well?

I don't think so, but I haven't looked at the migration path at all.
I'm hoping it "just works", but experience with the rest of the mm has
taught me there's probably an assumption in there that THP => PMD that
will need to be fixed.  I'm not currently testing on a NUMA machine, and
I'm still debugging other paths.
Daniel Jordan June 3, 2020, 4:08 p.m. UTC | #13
On Wed, Jun 03, 2020 at 10:06:31AM +0530, Anshuman Khandual wrote:
> Does this look okay and sufficient?
> 
> --- a/Documentation/vm/page_migration.rst
> +++ b/Documentation/vm/page_migration.rst
> @@ -253,5 +253,20 @@ which are function pointers of struct address_space_operations.
>       PG_isolated is alias with PG_reclaim flag so driver shouldn't use the flag
>       for own purpose.
>  
> +Quantifying Migration
> +=====================
> +Following events can be used to quantify page migration.
> +
> +- PGMIGRATE_SUCCESS
> +- PGMIGRATE_FAIL
> +- THP_MIGRATION_SUCCESS
> +- THP_MIGRATION_FAILURE
> +
> +THP_MIGRATION_FAILURE in particular represents an event when a THP could not be
> +migrated as a single entity following an allocation failure and ended up getting
> +split into constituent normal pages before being retried. This event, along with
> +PGMIGRATE_SUCCESS and PGMIGRATE_FAIL will help in quantifying and analyzing THP
> +migration events including both success and failure cases.

Looks great!

> Sure, will update the commit message accordingly.

Thanks.  Hopefully these will help someone in the future.
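
For completeness, a rough way to exercise the new events end to end is to move
a PMD-sized anonymous mapping across NUMA nodes with move_pages(2) and sample
thp_pmd_migration_success/failure before and after, e.g. with the read_vmstat()
sketch earlier. The following is only a sketch, assuming at least two NUMA
nodes, THP enabled, libnuma headers installed, and that node 1 exists; none of
it is part of the patch:

/*
 * Sketch: trigger a THP migration from userspace and watch the new counters.
 * Build with: gcc thp_migration_test.c -lnuma
 */
#define _GNU_SOURCE
#include <numaif.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>

#define PMD_THP_SIZE	(2UL << 20)	/* assumes 2M PMD-sized THPs */

int main(void)
{
	void *buf;
	int target_node = 1;	/* assumption: node 1 is present */
	int status = -1;

	/* PMD-aligned anonymous buffer, hinted for THP and faulted in. */
	if (posix_memalign(&buf, PMD_THP_SIZE, PMD_THP_SIZE))
		return 1;
	madvise(buf, PMD_THP_SIZE, MADV_HUGEPAGE);
	memset(buf, 0x5a, PMD_THP_SIZE);

	/* Ask the kernel to migrate the (hopefully huge) page to target_node. */
	if (move_pages(0, 1, &buf, &target_node, &status, MPOL_MF_MOVE) < 0) {
		perror("move_pages");
		return 1;
	}
	printf("page is now on node %d\n", status);

	/*
	 * A bump in thp_pmd_migration_success across this run means the THP
	 * was migrated as a single entity; a bump in thp_pmd_migration_failure
	 * means it was split and its base pages were retried instead.
	 */
	return 0;
}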

Patch

diff --git a/include/linux/vm_event_item.h b/include/linux/vm_event_item.h
index ffef0f279747..23d8f9884c2b 100644
--- a/include/linux/vm_event_item.h
+++ b/include/linux/vm_event_item.h
@@ -91,6 +91,10 @@  enum vm_event_item { PGPGIN, PGPGOUT, PSWPIN, PSWPOUT,
 		THP_ZERO_PAGE_ALLOC_FAILED,
 		THP_SWPOUT,
 		THP_SWPOUT_FALLBACK,
+#ifdef CONFIG_ARCH_ENABLE_THP_MIGRATION
+		THP_PMD_MIGRATION_SUCCESS,
+		THP_PMD_MIGRATION_FAILURE,
+#endif
 #endif
 #ifdef CONFIG_MEMORY_BALLOON
 		BALLOON_INFLATE,
diff --git a/mm/migrate.c b/mm/migrate.c
index 7160c1556f79..37f30bcfd628 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -1170,6 +1170,20 @@  static int __unmap_and_move(struct page *page, struct page *newpage,
 #define ICE_noinline
 #endif
 
+#ifdef CONFIG_ARCH_ENABLE_THP_MIGRATION
+static inline void thp_pmd_migration_success(bool success)
+{
+	if (success)
+		count_vm_event(THP_PMD_MIGRATION_SUCCESS);
+	else
+		count_vm_event(THP_PMD_MIGRATION_FAILURE);
+}
+#else
+static inline void thp_pmd_migration_success(bool success)
+{
+}
+#endif
+
 /*
  * Obtain the lock on page, remove all ptes and migrate the page
  * to the newly allocated page in newpage.
@@ -1232,6 +1246,14 @@  static ICE_noinline int unmap_and_move(new_page_t get_new_page,
 	 * we want to retry.
 	 */
 	if (rc == MIGRATEPAGE_SUCCESS) {
+		/*
+		 * When the page to be migrated has been freed from under
+		 * us, that is considered a MIGRATEPAGE_SUCCESS, but no
+		 * newpage has been allocated. It should not be counted
+		 * as a successful THP migration.
+		 */
+		if (newpage && PageTransHuge(newpage))
+			thp_pmd_migration_success(true);
 		put_page(page);
 		if (reason == MR_MEMORY_FAILURE) {
 			/*
@@ -1474,6 +1496,7 @@  int migrate_pages(struct list_head *from, new_page_t get_new_page,
 					unlock_page(page);
 					if (!rc) {
 						list_safe_reset_next(page, page2, lru);
+						thp_pmd_migration_success(false);
 						goto retry;
 					}
 				}
diff --git a/mm/vmstat.c b/mm/vmstat.c
index 96d21a792b57..e258c782fd3a 100644
--- a/mm/vmstat.c
+++ b/mm/vmstat.c
@@ -1274,6 +1274,10 @@  const char * const vmstat_text[] = {
 	"thp_zero_page_alloc_failed",
 	"thp_swpout",
 	"thp_swpout_fallback",
+#ifdef CONFIG_ARCH_ENABLE_THP_MIGRATION
+	"thp_pmd_migration_success",
+	"thp_pmd_migration_failure",
+#endif
 #endif
 #ifdef CONFIG_MEMORY_BALLOON
 	"balloon_inflate",