Message ID | 20240704012905.42971-3-ioworker0@gmail.com (mailing list archive)
---|---
State | New
Series | mm: introduce per-order mTHP split counters
On 04.07.24 03:29, Lance Yang wrote:
> This commit introduces documentation for mTHP split counters in
> transhuge.rst.
>
> Reviewed-by: Barry Song <baohua@kernel.org>
> Signed-off-by: Mingzhe Yang <mingzhe.yang@ly.com>
> Signed-off-by: Lance Yang <ioworker0@gmail.com>
> ---
>  Documentation/admin-guide/mm/transhuge.rst | 20 ++++++++++++++++----
>  1 file changed, 16 insertions(+), 4 deletions(-)
>
> diff --git a/Documentation/admin-guide/mm/transhuge.rst b/Documentation/admin-guide/mm/transhuge.rst
> index 1f72b00af5d3..0830aa173a8b 100644
> --- a/Documentation/admin-guide/mm/transhuge.rst
> +++ b/Documentation/admin-guide/mm/transhuge.rst
> @@ -369,10 +369,6 @@ also applies to the regions registered in khugepaged.
>  Monitoring usage
>  ================
>
> -.. note::
> -   Currently the below counters only record events relating to
> -   PMD-sized THP. Events relating to other THP sizes are not included.
> -
>  The number of PMD-sized anonymous transparent huge pages currently used by the
>  system is available by reading the AnonHugePages field in ``/proc/meminfo``.
>  To identify what applications are using PMD-sized anonymous transparent huge
> @@ -514,6 +510,22 @@ file_fallback_charge
>          falls back to using small pages even though the allocation was
>          successful.
>
> +split
> +        is incremented every time a huge page is successfully split into
> +        smaller orders. This can happen for a variety of reasons but a
> +        common reason is that a huge page is old and is being reclaimed.
> +        This action implies splitting any block mappings into PTEs.
> +
> +split_failed
> +        is incremented if kernel fails to split huge
> +        page. This can happen if the page was pinned by somebody.
> +
> +split_deferred
> +        is incremented when a huge page is put onto split
> +        queue. This happens when a huge page is partially unmapped and
> +        splitting it would free up some memory. Pages on split queue are
> +        going to be split under memory pressure.

".. if splitting is possible." ;)

Acked-by: David Hildenbrand <david@redhat.com>
On 04/07/2024 02:29, Lance Yang wrote:
> This commit introduces documentation for mTHP split counters in
> transhuge.rst.
>
> Reviewed-by: Barry Song <baohua@kernel.org>
> Signed-off-by: Mingzhe Yang <mingzhe.yang@ly.com>
> Signed-off-by: Lance Yang <ioworker0@gmail.com>
> ---
>  Documentation/admin-guide/mm/transhuge.rst | 20 ++++++++++++++++----
>  1 file changed, 16 insertions(+), 4 deletions(-)
>
> diff --git a/Documentation/admin-guide/mm/transhuge.rst b/Documentation/admin-guide/mm/transhuge.rst
> index 1f72b00af5d3..0830aa173a8b 100644
> --- a/Documentation/admin-guide/mm/transhuge.rst
> +++ b/Documentation/admin-guide/mm/transhuge.rst
> @@ -369,10 +369,6 @@ also applies to the regions registered in khugepaged.
>  Monitoring usage
>  ================
>
> -.. note::
> -   Currently the below counters only record events relating to
> -   PMD-sized THP. Events relating to other THP sizes are not included.
> -
>  The number of PMD-sized anonymous transparent huge pages currently used by the
>  system is available by reading the AnonHugePages field in ``/proc/meminfo``.
>  To identify what applications are using PMD-sized anonymous transparent huge
> @@ -514,6 +510,22 @@ file_fallback_charge
>          falls back to using small pages even though the allocation was
>          successful.
>
> +split
> +        is incremented every time a huge page is successfully split into
> +        smaller orders. This can happen for a variety of reasons but a
> +        common reason is that a huge page is old and is being reclaimed.
> +        This action implies splitting any block mappings into PTEs.

nit: the block mappings will already be PTEs if starting with mTHP?

regardless:

Reviewed-by: Ryan Roberts <ryan.roberts@arm.com>

> +
> +split_failed
> +        is incremented if kernel fails to split huge
> +        page. This can happen if the page was pinned by somebody.
> +
> +split_deferred
> +        is incremented when a huge page is put onto split
> +        queue. This happens when a huge page is partially unmapped and
> +        splitting it would free up some memory. Pages on split queue are
> +        going to be split under memory pressure.
> +
>  As the system ages, allocating huge pages may be expensive as the
>  system uses memory compaction to copy data around memory to free a
>  huge page for use. There are some counters in ``/proc/vmstat`` to help
On 05.07.24 11:16, Ryan Roberts wrote:
> On 04/07/2024 02:29, Lance Yang wrote:
>> This commit introduces documentation for mTHP split counters in
>> transhuge.rst.
>>
>> Reviewed-by: Barry Song <baohua@kernel.org>
>> Signed-off-by: Mingzhe Yang <mingzhe.yang@ly.com>
>> Signed-off-by: Lance Yang <ioworker0@gmail.com>
>> ---
>>  Documentation/admin-guide/mm/transhuge.rst | 20 ++++++++++++++++----
>>  1 file changed, 16 insertions(+), 4 deletions(-)
>>
>> diff --git a/Documentation/admin-guide/mm/transhuge.rst b/Documentation/admin-guide/mm/transhuge.rst
>> index 1f72b00af5d3..0830aa173a8b 100644
>> --- a/Documentation/admin-guide/mm/transhuge.rst
>> +++ b/Documentation/admin-guide/mm/transhuge.rst
>> @@ -369,10 +369,6 @@ also applies to the regions registered in khugepaged.
>>  Monitoring usage
>>  ================
>>
>> -.. note::
>> -   Currently the below counters only record events relating to
>> -   PMD-sized THP. Events relating to other THP sizes are not included.
>> -
>>  The number of PMD-sized anonymous transparent huge pages currently used by the
>>  system is available by reading the AnonHugePages field in ``/proc/meminfo``.
>>  To identify what applications are using PMD-sized anonymous transparent huge
>> @@ -514,6 +510,22 @@ file_fallback_charge
>>          falls back to using small pages even though the allocation was
>>          successful.
>>
>> +split
>> +        is incremented every time a huge page is successfully split into
>> +        smaller orders. This can happen for a variety of reasons but a
>> +        common reason is that a huge page is old and is being reclaimed.
>> +        This action implies splitting any block mappings into PTEs.
>
> nit: the block mappings will already be PTEs if starting with mTHP?

Was confused by that as well, so maybe just drop that detail here.
Hi David and Ryan,

Thanks for taking time to review! Updated the doc. How about the following?

split
        is incremented every time a huge page is successfully split into
        smaller orders. This can happen for a variety of reasons but a
        common reason is that a huge page is old and is being reclaimed.

split_failed
        is incremented if kernel fails to split huge page. This can happen
        if the page was pinned by somebody.

split_deferred
        is incremented when a huge page is put onto split queue. This
        happens when a huge page is partially unmapped and splitting it
        would free up some memory. Pages on split queue are going to be
        split under memory pressure, if splitting is possible.

Thanks,
Lance
Hi Andrew,

Could you please fold the following changes into this patch?

diff --git a/Documentation/admin-guide/mm/transhuge.rst b/Documentation/admin-guide/mm/transhuge.rst
index 747c811ee8f1..fe237825b95c 100644
--- a/Documentation/admin-guide/mm/transhuge.rst
+++ b/Documentation/admin-guide/mm/transhuge.rst
@@ -513,17 +513,16 @@ split
         is incremented every time a huge page is successfully split into
         smaller orders. This can happen for a variety of reasons but a
         common reason is that a huge page is old and is being reclaimed.
-        This action implies splitting any block mappings into PTEs.
 
 split_failed
         is incremented if kernel fails to split huge
         page. This can happen if the page was pinned by somebody.
 
 split_deferred
-        is incremented when a huge page is put onto split
-        queue. This happens when a huge page is partially unmapped and
-        splitting it would free up some memory. Pages on split queue are
-        going to be split under memory pressure.
+        is incremented when a huge page is put onto split queue.
+        This happens when a huge page is partially unmapped and splitting
+        it would free up some memory. Pages on split queue are going to
+        be split under memory pressure, if splitting is possible.
 
 As the system ages, allocating huge pages may be expensive as the
 system uses memory compaction to copy data around memory to free a
diff --git a/Documentation/admin-guide/mm/transhuge.rst b/Documentation/admin-guide/mm/transhuge.rst
index 1f72b00af5d3..0830aa173a8b 100644
--- a/Documentation/admin-guide/mm/transhuge.rst
+++ b/Documentation/admin-guide/mm/transhuge.rst
@@ -369,10 +369,6 @@ also applies to the regions registered in khugepaged.
 Monitoring usage
 ================
 
-.. note::
-   Currently the below counters only record events relating to
-   PMD-sized THP. Events relating to other THP sizes are not included.
-
 The number of PMD-sized anonymous transparent huge pages currently used by the
 system is available by reading the AnonHugePages field in ``/proc/meminfo``.
 To identify what applications are using PMD-sized anonymous transparent huge
@@ -514,6 +510,22 @@ file_fallback_charge
         falls back to using small pages even though the allocation was
         successful.
 
+split
+        is incremented every time a huge page is successfully split into
+        smaller orders. This can happen for a variety of reasons but a
+        common reason is that a huge page is old and is being reclaimed.
+        This action implies splitting any block mappings into PTEs.
+
+split_failed
+        is incremented if kernel fails to split huge
+        page. This can happen if the page was pinned by somebody.
+
+split_deferred
+        is incremented when a huge page is put onto split
+        queue. This happens when a huge page is partially unmapped and
+        splitting it would free up some memory. Pages on split queue are
+        going to be split under memory pressure.
+
 As the system ages, allocating huge pages may be expensive as the
 system uses memory compaction to copy data around memory to free a
 huge page for use. There are some counters in ``/proc/vmstat`` to help
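As a practical aside on the counters this thread documents: since the series makes the split counters per-order, a monitoring script has to sum them across sizes to get a system-wide view. Below is a minimal sketch of such a helper. It assumes the per-order sysfs layout discussed in this series (`/sys/kernel/mm/transparent_hugepage/hugepages-<size>kB/stats/<counter>`); the `sum_counter` name is hypothetical, and the demo runs against a mock directory tree so the sketch works without the real sysfs files.

```shell
#!/bin/sh
# Sketch: sum a per-order mTHP counter (split, split_failed, split_deferred)
# across all hugepage sizes. Path layout is an assumption based on the
# per-order mTHP stats discussed in this series, not verified here.

sum_counter() {
    root="$1" name="$2" total=0
    # Glob over every hugepages-<size>kB directory under the given root.
    for f in "$root"/hugepages-*kB/stats/"$name"; do
        [ -r "$f" ] || continue          # skip if the glob did not match
        total=$((total + $(cat "$f")))
    done
    echo "$total"
}

# Demonstrate against a mock tree so the sketch runs anywhere.
tmp=$(mktemp -d)
mkdir -p "$tmp/hugepages-64kB/stats" "$tmp/hugepages-2048kB/stats"
echo 3 > "$tmp/hugepages-64kB/stats/split"
echo 7 > "$tmp/hugepages-2048kB/stats/split"
sum_counter "$tmp" split                 # prints 10
rm -rf "$tmp"
```

On a real system with this series applied, one would presumably call `sum_counter /sys/kernel/mm/transparent_hugepage split` instead of the mock root.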