Message ID | 20231204102027.57185-1-ryan.roberts@arm.com (mailing list archive) |
---|---|
Series | Multi-size THP for anonymous memory |
On Mon, 4 Dec 2023 10:20:17 +0000 Ryan Roberts <ryan.roberts@arm.com> wrote: > Hi All, > > > Prerequisites > ============= > > Some work items identified as being prerequisites are listed on page 3 at [9]. > The summary is: > > | item | status | > |:------------------------------|:------------------------| > | mlock | In mainline (v6.7) | > | madvise | In mainline (v6.6) | > | compaction | v1 posted [10] | > | numa balancing | Investigated: see below | > | user-triggered page migration | In mainline (v6.7) | > | khugepaged collapse | In mainline (NOP) | What does "prerequisites" mean here? Won't compile without? Kernel crashes without? Nice-to-have-after? Please expand on this. I looked at [9], but access is denied. > [9] https://drive.google.com/file/d/1GnfYFpr7_c1kA41liRUW5YtCb8Cj18Ud/view?usp=sharing&resourcekey=0-U1Mj3-RhLD1JV6EThpyPyA
On Mon, Dec 4, 2023 at 11:20 PM Ryan Roberts <ryan.roberts@arm.com> wrote: > > Hi All, > > A new week, a new version, a new name... This is v8 of a series to implement > multi-size THP (mTHP) for anonymous memory (previously called "small-sized THP" > and "large anonymous folios"). Matthew objected to "small huge" so hopefully > this fares better. > > The objective of this is to improve performance by allocating larger chunks of > memory during anonymous page faults: > > 1) Since SW (the kernel) is dealing with larger chunks of memory than base > pages, there are efficiency savings to be had; fewer page faults, batched PTE > and RMAP manipulation, reduced lru list, etc. In short, we reduce kernel > overhead. This should benefit all architectures. > 2) Since we are now mapping physically contiguous chunks of memory, we can take > advantage of HW TLB compression techniques. A reduction in TLB pressure > speeds up kernel and user space. arm64 systems have 2 mechanisms to coalesce > TLB entries; "the contiguous bit" (architectural) and HPA (uarch). > > This version changes the name and tidies up some of the kernel code and test > code, based on feedback against v7 (see change log for details). > > By default, the existing behaviour (and performance) is maintained. The user > must explicitly enable multi-size THP to see the performance benefit. This is > done via a new sysfs interface (as recommended by David Hildenbrand - thanks to > David for the suggestion)! This interface is inspired by the existing > per-hugepage-size sysfs interface used by hugetlb, provides full backwards > compatibility with the existing PMD-size THP interface, and provides a base for > future extensibility. See [8] for detailed discussion of the interface. > > This series is based on mm-unstable (715b67adf4c8). > > > Prerequisites > ============= > > Some work items identified as being prerequisites are listed on page 3 at [9]. > The summary is: > > | item | status | > |:------------------------------|:------------------------| > | mlock | In mainline (v6.7) | > | madvise | In mainline (v6.6) | > | compaction | v1 posted [10] | > | numa balancing | Investigated: see below | > | user-triggered page migration | In mainline (v6.7) | > | khugepaged collapse | In mainline (NOP) | > > On NUMA balancing, which currently ignores any PTE-mapped THPs it encounters, > John Hubbard has investigated this and concluded that it is A) not clear at the > moment what a better policy might be for PTE-mapped THP and B) questions whether > this should really be considered a prerequisite given no regression is caused > for the default "multi-size THP disabled" case, and there is no correctness > issue when it is enabled - its just a potential for non-optimal performance. > > If there are no disagreements about removing numa balancing from the list (none > were raised when I first posted this comment against v7), then that just leaves > compaction which is in review on list at the moment. > > I really would like to get this series (and its remaining comapction > prerequisite) in for v6.8. I accept that it may be a bit optimistic at this > point, but lets see where we get to with review? > Hi Ryan, A question but i don't think it should block this series, do we have any plan to extend /proc/meminfo, /proc/pid/smaps, /proc/vmstat to present some information regarding the new multi-size THP. e.g how many folios in each-size for the system, how many multi-size folios LRU, how many large folios in each VMA etc. 
In products and labs, we need some health monitors to make sure the system status is visible and works as expected. right now, i feel i am like blindly exploring the system without those statistics. > > Testing > ======= > > The series includes patches for mm selftests to enlighten the cow and khugepaged > tests to explicitly test with multi-size THP, in the same way that PMD-sized > THP is tested. The new tests all pass, and no regressions are observed in the mm > selftest suite. I've also run my usual kernel compilation and java script > benchmarks without any issues. > > Refer to my performance numbers posted with v6 [6]. (These are for multi-size > THP only - they do not include the arm64 contpte follow-on series). > > John Hubbard at Nvidia has indicated dramatic 10x performance improvements for > some workloads at [11]. (Observed using v6 of this series as well as the arm64 > contpte series). > > Kefeng Wang at Huawei has also indicated he sees improvements at [12] although > there are some latency regressions also. > > > Changes since v7 [7] > ==================== > > - Renamed "small-sized THP" -> "multi-size THP" in commit logs > - Added various Reviewed-by/Tested-by tags (Barry, David, Alistair) > - Patch 3: > - Fine-tuned transhuge documentation multi-size THP (JohnH) > - Converted hugepage_global_enabled() and hugepage_global_always() macros > to static inline functions (JohnH) > - Renamed hugepage_vma_check() to thp_vma_allowable_orders() (JohnH) > - Renamed transhuge_vma_suitable() to thp_vma_suitable_orders() (JohnH) > - Renamed "global" enabled sysfs file option to "inherit" (JohnH) > - Patch 9: > - cow selftest: Renamed param size -> thpsize (David) > - cow selftest: Changed test fail to assert() (David) > - cow selftest: Log PMD size separately from all the supported THP sizes > (David) > - Patch 10: > - cow selftest: No longer special case pmdsize; keep all THP sizes in > thpsizes[] > > > Changes since v6 [6] > ==================== > > - Refactored vmf_pte_range_changed() to remove uffd special-case (suggested by > JohnH) > - Dropped accounting patch (#3 in v6) (suggested by DavidH) > - Continue to account *PMD-sized* THP only for now > - Can add more counters in future if needed > - Page cache large folios haven't needed any new counters yet > - Pivot to sysfs ABI proposed by DavidH > - per-size directories in a similar shape to that used by hugetlb > - Dropped "recommend" keyword patch (#6 in v6) (suggested by DavidH, Yu Zhou) > - For now, users need to understand implicitly which sizes are beneficial > to their HW/SW > - Dropped arch_wants_pte_order() patch (#7 in v6) > - No longer needed due to dropping patch "recommend" keyword patch > - Enlightened khugepaged mm selftest to explicitly test with small-size THP > - Scrubbed commit logs to use "small-sized THP" consistently (suggested by > DavidH) > > > Changes since v5 [5] > ==================== > > - Added accounting for PTE-mapped THPs (patch 3) > - Added runtime control mechanism via sysfs as extension to THP (patch 4) > - Minor refactoring of alloc_anon_folio() to integrate with runtime controls > - Stripped out hardcoded policy for allocation order; its now all user space > controlled (although user space can request "recommend" which will configure > the HW-preferred order) > > > Changes since v4 [4] > ==================== > > - Removed "arm64: mm: Override arch_wants_pte_order()" patch; arm64 > now uses the default order-3 size. I have moved this patch over to > the contpte series. 
> - Added "mm: Allow deferred splitting of arbitrary large anon folios" back > into series. I originally removed this at v2 to add to a separate series, > but that series has transformed significantly and it no longer fits, so > bringing it back here. > - Reintroduced dependency on set_ptes(); Originally dropped this at v2, but > set_ptes() is in mm-unstable now. > - Updated policy for when to allocate LAF; only fallback to order-0 if > MADV_NOHUGEPAGE is present or if THP disabled via prctl; no longer rely on > sysfs's never/madvise/always knob. > - Fallback to order-0 whenever uffd is armed for the vma, not just when > uffd-wp is set on the pte. > - alloc_anon_folio() now returns `struct folio *`, where errors are encoded > with ERR_PTR(). > > The last 3 changes were proposed by Yu Zhao - thanks! > > > Changes since v3 [3] > ==================== > > - Renamed feature from FLEXIBLE_THP to LARGE_ANON_FOLIO. > - Removed `flexthp_unhinted_max` boot parameter. Discussion concluded that a > sysctl is preferable but we will wait until real workload needs it. > - Fixed uninitialized `addr` on read fault path in do_anonymous_page(). > - Added mm selftests for large anon folios in cow test suite. > > > Changes since v2 [2] > ==================== > > - Dropped commit "Allow deferred splitting of arbitrary large anon folios" > - Huang, Ying suggested the "batch zap" work (which I dropped from this > series after v1) is a prerequisite for merging FLXEIBLE_THP, so I've > moved the deferred split patch to a separate series along with the batch > zap changes. I plan to submit this series early next week. > - Changed folio order fallback policy > - We no longer iterate from preferred to 0 looking for acceptable policy > - Instead we iterate through preferred, PAGE_ALLOC_COSTLY_ORDER and 0 only > - Removed vma parameter from arch_wants_pte_order() > - Added command line parameter `flexthp_unhinted_max` > - clamps preferred order when vma hasn't explicitly opted-in to THP > - Never allocate large folio for MADV_NOHUGEPAGE vma (or when THP is disabled > for process or system). 
> - Simplified implementation and integration with do_anonymous_page() > - Removed dependency on set_ptes() > > > Changes since v1 [1] > ==================== > > - removed changes to arch-dependent vma_alloc_zeroed_movable_folio() > - replaced with arch-independent alloc_anon_folio() > - follows THP allocation approach > - no longer retry with intermediate orders if allocation fails > - fallback directly to order-0 > - remove folio_add_new_anon_rmap_range() patch > - instead add its new functionality to folio_add_new_anon_rmap() > - remove batch-zap pte mappings optimization patch > - remove enabler folio_remove_rmap_range() patch too > - These offer real perf improvement so will submit separately > - simplify Kconfig > - single FLEXIBLE_THP option, which is independent of arch > - depends on TRANSPARENT_HUGEPAGE > - when enabled default to max anon folio size of 64K unless arch > explicitly overrides > - simplify changes to do_anonymous_page(): > - no more retry loop > > > [1] https://lore.kernel.org/linux-mm/20230626171430.3167004-1-ryan.roberts@arm.com/ > [2] https://lore.kernel.org/linux-mm/20230703135330.1865927-1-ryan.roberts@arm.com/ > [3] https://lore.kernel.org/linux-mm/20230714160407.4142030-1-ryan.roberts@arm.com/ > [4] https://lore.kernel.org/linux-mm/20230726095146.2826796-1-ryan.roberts@arm.com/ > [5] https://lore.kernel.org/linux-mm/20230810142942.3169679-1-ryan.roberts@arm.com/ > [6] https://lore.kernel.org/linux-mm/20230929114421.3761121-1-ryan.roberts@arm.com/ > [7] https://lore.kernel.org/linux-mm/20231122162950.3854897-1-ryan.roberts@arm.com/ > [8] https://lore.kernel.org/linux-mm/6d89fdc9-ef55-d44e-bf12-fafff318aef8@redhat.com/ > [9] https://drive.google.com/file/d/1GnfYFpr7_c1kA41liRUW5YtCb8Cj18Ud/view?usp=sharing&resourcekey=0-U1Mj3-RhLD1JV6EThpyPyA > [10] https://lore.kernel.org/linux-mm/20231113170157.280181-1-zi.yan@sent.com/ > [11] https://lore.kernel.org/linux-mm/c507308d-bdd4-5f9e-d4ff-e96e4520be85@nvidia.com/ > [12] https://lore.kernel.org/linux-mm/479b3e2b-456d-46c1-9677-38f6c95a0be8@huawei.com/ > > > Thanks, > Ryan > > Ryan Roberts (10): > mm: Allow deferred splitting of arbitrary anon large folios > mm: Non-pmd-mappable, large folios for folio_add_new_anon_rmap() > mm: thp: Introduce multi-size THP sysfs interface > mm: thp: Support allocation of anonymous multi-size THP > selftests/mm/kugepaged: Restore thp settings at exit > selftests/mm: Factor out thp settings management > selftests/mm: Support multi-size THP interface in thp_settings > selftests/mm/khugepaged: Enlighten for multi-size THP > selftests/mm/cow: Generalize do_run_with_thp() helper > selftests/mm/cow: Add tests for anonymous multi-size THP > > Documentation/admin-guide/mm/transhuge.rst | 97 ++++- > Documentation/filesystems/proc.rst | 6 +- > fs/proc/task_mmu.c | 3 +- > include/linux/huge_mm.h | 116 ++++-- > mm/huge_memory.c | 268 ++++++++++++-- > mm/khugepaged.c | 20 +- > mm/memory.c | 114 +++++- > mm/page_vma_mapped.c | 3 +- > mm/rmap.c | 32 +- > tools/testing/selftests/mm/Makefile | 4 +- > tools/testing/selftests/mm/cow.c | 185 +++++++--- > tools/testing/selftests/mm/khugepaged.c | 410 ++++----------------- > tools/testing/selftests/mm/run_vmtests.sh | 2 + > tools/testing/selftests/mm/thp_settings.c | 349 ++++++++++++++++++ > tools/testing/selftests/mm/thp_settings.h | 80 ++++ > 15 files changed, 1177 insertions(+), 512 deletions(-) > create mode 100644 tools/testing/selftests/mm/thp_settings.c > create mode 100644 tools/testing/selftests/mm/thp_settings.h > > -- > 2.25.1 > Thanks Barry
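As a rough illustration of the per-size sysfs interface described in the cover letter above, the sketch below (not part of the original posting) shows how user space might discover and enable one mTHP size. It assumes the hugetlb-style hugepages-<size>kB directory layout discussed in [8]; the authoritative names and accepted values are in the Documentation/admin-guide/mm/transhuge.rst changes in this series.

```python
# Illustrative only: assumes the per-size layout proposed by this series,
# /sys/kernel/mm/transparent_hugepage/hugepages-<size>kB/enabled, accepting
# "always", "inherit", "madvise" or "never". Writing requires root.
from pathlib import Path

THP_ROOT = Path("/sys/kernel/mm/transparent_hugepage")

def supported_mthp_sizes_kb() -> list[int]:
    """List the folio sizes (kB) for which the kernel exposes per-size controls."""
    return sorted(int(d.name[len("hugepages-"):-len("kB")])
                  for d in THP_ROOT.glob("hugepages-*kB"))

def enable_mthp(size_kb: int, policy: str = "always") -> None:
    """Opt one mTHP size in, e.g. enable_mthp(64) on a 4K-base-page kernel."""
    ctl = THP_ROOT / f"hugepages-{size_kb}kB" / "enabled"
    if not ctl.exists():
        raise RuntimeError(f"{ctl} missing: no mTHP control for {size_kb}kB")
    ctl.write_text(policy)

if __name__ == "__main__":
    print("per-size THP controls:", supported_mthp_sizes_kb())
    enable_mthp(64)
```

Since the default behaviour leaves every size other than the PMD size disabled, reading these files back is also an easy way to confirm which sizes a given test run actually had enabled.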
On 12/4/23 02:20, Ryan Roberts wrote: > Hi All, > > A new week, a new version, a new name... This is v8 of a series to implement > multi-size THP (mTHP) for anonymous memory (previously called "small-sized THP" > and "large anonymous folios"). Matthew objected to "small huge" so hopefully > this fares better. > > The objective of this is to improve performance by allocating larger chunks of > memory during anonymous page faults: > > 1) Since SW (the kernel) is dealing with larger chunks of memory than base > pages, there are efficiency savings to be had; fewer page faults, batched PTE > and RMAP manipulation, reduced lru list, etc. In short, we reduce kernel > overhead. This should benefit all architectures. > 2) Since we are now mapping physically contiguous chunks of memory, we can take > advantage of HW TLB compression techniques. A reduction in TLB pressure > speeds up kernel and user space. arm64 systems have 2 mechanisms to coalesce > TLB entries; "the contiguous bit" (architectural) and HPA (uarch). > > This version changes the name and tidies up some of the kernel code and test > code, based on feedback against v7 (see change log for details). Using a couple of Armv8 systems, I've tested this patchset. I applied it to top of tree (Linux 6.7-rc4), on top of your latest contig pte series [1]. With those two patchsets applied, the mm selftests look OK--or at least as OK as they normally do. I compared test runs between THP/mTHP set to "always", vs "never", to verify that there were no new test failures. Details: specifically, I set one particular page size (2 MB) to "inherit", and then toggled /sys/kernel/mm/transparent_hugepage/enabled between "always" and "never". I also re-ran my usual compute/AI benchmark, and I'm still seeing the same 10x performance improvement that I reported for the v6 patchset. So for this patchset and for [1] as well, please feel free to add: Tested-by: John Hubbard <jhubbard@nvidia.com> [1] https://lore.kernel.org/all/20231204105440.61448-1-ryan.roberts@arm.com/ thanks,
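For reference, the "inherit"-plus-global-toggle setup John describes could be scripted along the following lines; this is only a sketch under the same path assumptions as the earlier snippet, not something posted in the thread.

```python
# Sketch of John's test toggle: mark one size (2 MB in his runs) as "inherit",
# then flip the top-level enabled control between selftest runs. Paths assume
# the per-size interface from this series; run as root.
from pathlib import Path

THP = Path("/sys/kernel/mm/transparent_hugepage")

def set_size_policy(size_kb: int, policy: str) -> None:
    (THP / f"hugepages-{size_kb}kB" / "enabled").write_text(policy)

def set_global_policy(mode: str) -> None:   # "always", "madvise" or "never"
    (THP / "enabled").write_text(mode)

# Compare two selftest runs with and without mTHP:
#   set_size_policy(2048, "inherit"); set_global_policy("always")   # run 1
#   set_global_policy("never")                                      # run 2
```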
On 04/12/2023 19:30, Andrew Morton wrote: > On Mon, 4 Dec 2023 10:20:17 +0000 Ryan Roberts <ryan.roberts@arm.com> wrote: > >> Hi All, >> >> >> Prerequisites >> ============= >> >> Some work items identified as being prerequisites are listed on page 3 at [9]. >> The summary is: >> >> | item | status | >> |:------------------------------|:------------------------| >> | mlock | In mainline (v6.7) | >> | madvise | In mainline (v6.6) | >> | compaction | v1 posted [10] | >> | numa balancing | Investigated: see below | >> | user-triggered page migration | In mainline (v6.7) | >> | khugepaged collapse | In mainline (NOP) | > > What does "prerequisites" mean here? Won't compile without? Kernel crashes without? Nice-to-have-after? Please expand on this.

Short answer: It's supposed to mean things that either need to be done to prevent the mm from regressing (both correctness and performance) when multi-size THP is present but disabled, or things that need to be done to make the mm robust (but not necessarily optimally performant) when multi-size THP is enabled. But in reality, all of the things on the list could really be reclassified as "nice-to-have-after", IMHO; their absence will cause neither compilation nor runtime errors.

Longer answer: When I first started looking at this, I was advised that there were likely a number of corners which made assumptions about large folios always being PMD-sized, and if not found and fixed, could lead to stability issues. At the time I was also pursuing a strategy of multi-size THP being a compile-time feature with no runtime control, so I decided it was important for multi-size THP to not effectively disable other features (e.g. various madvise ops used to ignore PTE-mapped large folios). This list represents all the things that I could find based on code review, as well as things suggested by others, and in the end, they all fall into that last category of "PTE-mapped large folios effectively disable existing features". But given we now have runtime controls to opt in to multi-size THP, I'm not sure we need to classify these as prerequisites. But I didn't want to unilaterally make that decision, given this list has previously been discussed and agreed by others. It's also worth noting that in the case of compaction, that's already a problem for large folios in the page cache; large folios will be skipped.

> > I looked at [9], but access is denied.

Sorry about that; it's owned by David Rientjes so I can't fix that for you. It's a PDF of a slide with the following table:

| Item | Description | Assignee | Status |
|:------------------------------|:------------|:---------|:-------|
| mlock | Large, pte-mapped folios are ignored when mlock is requested. Code comment for mlock_vma_folio() says "...filter out pte mappings of THPs which cannot be consistently counted: a pte mapping of the THP head cannot be distinguished by the page alone." | Yin, Fengwei | In mainline (v6.7) |
| madvise | MADV_COLD, MADV_PAGEOUT, MADV_FREE: For large folios, code assumes exclusive only if mapcount==1, else skips remainder of operation. For large, pte-mapped folios, exclusive folios can have mapcount up to nr_pages and still be exclusive. Even better; don't split the folio if it fits entirely within the range. | Yin, Fengwei | In mainline (v6.6) |
| compaction | Raised at LSFMM: Compaction skips non-order-0 pages. Already a problem for page-cache pages today. | Zi Yan | v1 posted |
| numa balancing | Large, pte-mapped folios are ignored by numa-balancing code. Commit comment (e81c480): "We're going to have THP mapped with PTEs. It will confuse numabalancing. Let's skip them for now." | John Hubbard | Investigated: Not prerequisite |
| user-triggered page migration | mm/migrate.c (migrate_pages syscall) We don't want to migrate folio that is shared. | Kefeng Wang | In mainline (v6.7) |
| khugepaged collapse | Collapse small-sized THP to PMD-sized THP in khugepaged/MADV_COLLAPSE. Kirill thinks khugepaged should already be able to collapse small large folios to PMD-sized THP; verification required. | Ryan Roberts | In mainline (NOP) |

Thanks, Ryan > >> [9] https://drive.google.com/file/d/1GnfYFpr7_c1kA41liRUW5YtCb8Cj18Ud/view?usp=sharing&resourcekey=0-U1Mj3-RhLD1JV6EThpyPyA > >
On 05/12/2023 03:28, Barry Song wrote: > On Mon, Dec 4, 2023 at 11:20 PM Ryan Roberts <ryan.roberts@arm.com> wrote: >> >> Hi All, >> >> A new week, a new version, a new name... This is v8 of a series to implement >> multi-size THP (mTHP) for anonymous memory (previously called "small-sized THP" >> and "large anonymous folios"). Matthew objected to "small huge" so hopefully >> this fares better. >> >> The objective of this is to improve performance by allocating larger chunks of >> memory during anonymous page faults: >> >> 1) Since SW (the kernel) is dealing with larger chunks of memory than base >> pages, there are efficiency savings to be had; fewer page faults, batched PTE >> and RMAP manipulation, reduced lru list, etc. In short, we reduce kernel >> overhead. This should benefit all architectures. >> 2) Since we are now mapping physically contiguous chunks of memory, we can take >> advantage of HW TLB compression techniques. A reduction in TLB pressure >> speeds up kernel and user space. arm64 systems have 2 mechanisms to coalesce >> TLB entries; "the contiguous bit" (architectural) and HPA (uarch). >> >> This version changes the name and tidies up some of the kernel code and test >> code, based on feedback against v7 (see change log for details). >> >> By default, the existing behaviour (and performance) is maintained. The user >> must explicitly enable multi-size THP to see the performance benefit. This is >> done via a new sysfs interface (as recommended by David Hildenbrand - thanks to >> David for the suggestion)! This interface is inspired by the existing >> per-hugepage-size sysfs interface used by hugetlb, provides full backwards >> compatibility with the existing PMD-size THP interface, and provides a base for >> future extensibility. See [8] for detailed discussion of the interface. >> >> This series is based on mm-unstable (715b67adf4c8). >> >> >> Prerequisites >> ============= >> >> Some work items identified as being prerequisites are listed on page 3 at [9]. >> The summary is: >> >> | item | status | >> |:------------------------------|:------------------------| >> | mlock | In mainline (v6.7) | >> | madvise | In mainline (v6.6) | >> | compaction | v1 posted [10] | >> | numa balancing | Investigated: see below | >> | user-triggered page migration | In mainline (v6.7) | >> | khugepaged collapse | In mainline (NOP) | >> >> On NUMA balancing, which currently ignores any PTE-mapped THPs it encounters, >> John Hubbard has investigated this and concluded that it is A) not clear at the >> moment what a better policy might be for PTE-mapped THP and B) questions whether >> this should really be considered a prerequisite given no regression is caused >> for the default "multi-size THP disabled" case, and there is no correctness >> issue when it is enabled - its just a potential for non-optimal performance. >> >> If there are no disagreements about removing numa balancing from the list (none >> were raised when I first posted this comment against v7), then that just leaves >> compaction which is in review on list at the moment. >> >> I really would like to get this series (and its remaining comapction >> prerequisite) in for v6.8. I accept that it may be a bit optimistic at this >> point, but lets see where we get to with review? >> > > Hi Ryan, > > A question but i don't think it should block this series, do we have any plan > to extend /proc/meminfo, /proc/pid/smaps, /proc/vmstat to present some > information regarding the new multi-size THP. 
> > e.g how many folios in each-size for the system, how many multi-size folios LRU, > how many large folios in each VMA etc. > > In products and labs, we need some health monitors to make sure the system > status is visible and works as expected. right now, i feel i am like > blindly exploring > the system without those statistics. Yes it's definitely on the list. I had a patch in v6 that added various stats. But after discussion with David, it became clear there were a few issues with the implementation and I ripped it out. We also decided that since the page cache already uses large folios and we don't have counters for those, we could probably live (initially at least) without counters for multi-size THP too. But you are the second person to raise this in as many weeks, so clearly this should be at the top of the list for enhancements after this initial merge. For now, you can parse /proc/<pid>/pagemap to see how well multi-size THP is being utilized. That's not a simple interface though. Yu Zhou shared a Python script a while back. I wonder if there is value in tidying that up and putting it in tools/mm in the short term? There were 2 main issues with the previous implementation: 1) What should the semantic of a counter be? Because a PTE-mapped THP can be partially unmapped or mremapped. So should we count number of pages from a folio of a given size that are mapped (easy) or should we only count when the whole folio is contiguously mapped? (I'm sure there are many other semantics we could consider). The latter is not easy to spot at the moment - perhaps the work David has been doing on tidying up the rmap functions might help? 2) How should we expose the info? There has been pushback for extending files in sysfs that expose multiple pieces of data, so David has suggested that in the long term it might be good to completely redesign the stats interface. It's certainly something that needs a lot more discussion - input encouraged! Thanks, Ryan > >> >> Testing >> ======= >> >> The series includes patches for mm selftests to enlighten the cow and khugepaged >> tests to explicitly test with multi-size THP, in the same way that PMD-sized >> THP is tested. The new tests all pass, and no regressions are observed in the mm >> selftest suite. I've also run my usual kernel compilation and java script >> benchmarks without any issues. >> >> Refer to my performance numbers posted with v6 [6]. (These are for multi-size >> THP only - they do not include the arm64 contpte follow-on series). >> >> John Hubbard at Nvidia has indicated dramatic 10x performance improvements for >> some workloads at [11]. (Observed using v6 of this series as well as the arm64 >> contpte series). >> >> Kefeng Wang at Huawei has also indicated he sees improvements at [12] although >> there are some latency regressions also. 
>> >> >> Changes since v7 [7] >> ==================== >> >> - Renamed "small-sized THP" -> "multi-size THP" in commit logs >> - Added various Reviewed-by/Tested-by tags (Barry, David, Alistair) >> - Patch 3: >> - Fine-tuned transhuge documentation multi-size THP (JohnH) >> - Converted hugepage_global_enabled() and hugepage_global_always() macros >> to static inline functions (JohnH) >> - Renamed hugepage_vma_check() to thp_vma_allowable_orders() (JohnH) >> - Renamed transhuge_vma_suitable() to thp_vma_suitable_orders() (JohnH) >> - Renamed "global" enabled sysfs file option to "inherit" (JohnH) >> - Patch 9: >> - cow selftest: Renamed param size -> thpsize (David) >> - cow selftest: Changed test fail to assert() (David) >> - cow selftest: Log PMD size separately from all the supported THP sizes >> (David) >> - Patch 10: >> - cow selftest: No longer special case pmdsize; keep all THP sizes in >> thpsizes[] >> >> >> Changes since v6 [6] >> ==================== >> >> - Refactored vmf_pte_range_changed() to remove uffd special-case (suggested by >> JohnH) >> - Dropped accounting patch (#3 in v6) (suggested by DavidH) >> - Continue to account *PMD-sized* THP only for now >> - Can add more counters in future if needed >> - Page cache large folios haven't needed any new counters yet >> - Pivot to sysfs ABI proposed by DavidH >> - per-size directories in a similar shape to that used by hugetlb >> - Dropped "recommend" keyword patch (#6 in v6) (suggested by DavidH, Yu Zhou) >> - For now, users need to understand implicitly which sizes are beneficial >> to their HW/SW >> - Dropped arch_wants_pte_order() patch (#7 in v6) >> - No longer needed due to dropping patch "recommend" keyword patch >> - Enlightened khugepaged mm selftest to explicitly test with small-size THP >> - Scrubbed commit logs to use "small-sized THP" consistently (suggested by >> DavidH) >> >> >> Changes since v5 [5] >> ==================== >> >> - Added accounting for PTE-mapped THPs (patch 3) >> - Added runtime control mechanism via sysfs as extension to THP (patch 4) >> - Minor refactoring of alloc_anon_folio() to integrate with runtime controls >> - Stripped out hardcoded policy for allocation order; its now all user space >> controlled (although user space can request "recommend" which will configure >> the HW-preferred order) >> >> >> Changes since v4 [4] >> ==================== >> >> - Removed "arm64: mm: Override arch_wants_pte_order()" patch; arm64 >> now uses the default order-3 size. I have moved this patch over to >> the contpte series. >> - Added "mm: Allow deferred splitting of arbitrary large anon folios" back >> into series. I originally removed this at v2 to add to a separate series, >> but that series has transformed significantly and it no longer fits, so >> bringing it back here. >> - Reintroduced dependency on set_ptes(); Originally dropped this at v2, but >> set_ptes() is in mm-unstable now. >> - Updated policy for when to allocate LAF; only fallback to order-0 if >> MADV_NOHUGEPAGE is present or if THP disabled via prctl; no longer rely on >> sysfs's never/madvise/always knob. >> - Fallback to order-0 whenever uffd is armed for the vma, not just when >> uffd-wp is set on the pte. >> - alloc_anon_folio() now returns `struct folio *`, where errors are encoded >> with ERR_PTR(). >> >> The last 3 changes were proposed by Yu Zhao - thanks! >> >> >> Changes since v3 [3] >> ==================== >> >> - Renamed feature from FLEXIBLE_THP to LARGE_ANON_FOLIO. >> - Removed `flexthp_unhinted_max` boot parameter. 
Discussion concluded that a >> sysctl is preferable but we will wait until real workload needs it. >> - Fixed uninitialized `addr` on read fault path in do_anonymous_page(). >> - Added mm selftests for large anon folios in cow test suite. >> >> >> Changes since v2 [2] >> ==================== >> >> - Dropped commit "Allow deferred splitting of arbitrary large anon folios" >> - Huang, Ying suggested the "batch zap" work (which I dropped from this >> series after v1) is a prerequisite for merging FLXEIBLE_THP, so I've >> moved the deferred split patch to a separate series along with the batch >> zap changes. I plan to submit this series early next week. >> - Changed folio order fallback policy >> - We no longer iterate from preferred to 0 looking for acceptable policy >> - Instead we iterate through preferred, PAGE_ALLOC_COSTLY_ORDER and 0 only >> - Removed vma parameter from arch_wants_pte_order() >> - Added command line parameter `flexthp_unhinted_max` >> - clamps preferred order when vma hasn't explicitly opted-in to THP >> - Never allocate large folio for MADV_NOHUGEPAGE vma (or when THP is disabled >> for process or system). >> - Simplified implementation and integration with do_anonymous_page() >> - Removed dependency on set_ptes() >> >> >> Changes since v1 [1] >> ==================== >> >> - removed changes to arch-dependent vma_alloc_zeroed_movable_folio() >> - replaced with arch-independent alloc_anon_folio() >> - follows THP allocation approach >> - no longer retry with intermediate orders if allocation fails >> - fallback directly to order-0 >> - remove folio_add_new_anon_rmap_range() patch >> - instead add its new functionality to folio_add_new_anon_rmap() >> - remove batch-zap pte mappings optimization patch >> - remove enabler folio_remove_rmap_range() patch too >> - These offer real perf improvement so will submit separately >> - simplify Kconfig >> - single FLEXIBLE_THP option, which is independent of arch >> - depends on TRANSPARENT_HUGEPAGE >> - when enabled default to max anon folio size of 64K unless arch >> explicitly overrides >> - simplify changes to do_anonymous_page(): >> - no more retry loop >> >> >> [1] https://lore.kernel.org/linux-mm/20230626171430.3167004-1-ryan.roberts@arm.com/ >> [2] https://lore.kernel.org/linux-mm/20230703135330.1865927-1-ryan.roberts@arm.com/ >> [3] https://lore.kernel.org/linux-mm/20230714160407.4142030-1-ryan.roberts@arm.com/ >> [4] https://lore.kernel.org/linux-mm/20230726095146.2826796-1-ryan.roberts@arm.com/ >> [5] https://lore.kernel.org/linux-mm/20230810142942.3169679-1-ryan.roberts@arm.com/ >> [6] https://lore.kernel.org/linux-mm/20230929114421.3761121-1-ryan.roberts@arm.com/ >> [7] https://lore.kernel.org/linux-mm/20231122162950.3854897-1-ryan.roberts@arm.com/ >> [8] https://lore.kernel.org/linux-mm/6d89fdc9-ef55-d44e-bf12-fafff318aef8@redhat.com/ >> [9] https://drive.google.com/file/d/1GnfYFpr7_c1kA41liRUW5YtCb8Cj18Ud/view?usp=sharing&resourcekey=0-U1Mj3-RhLD1JV6EThpyPyA >> [10] https://lore.kernel.org/linux-mm/20231113170157.280181-1-zi.yan@sent.com/ >> [11] https://lore.kernel.org/linux-mm/c507308d-bdd4-5f9e-d4ff-e96e4520be85@nvidia.com/ >> [12] https://lore.kernel.org/linux-mm/479b3e2b-456d-46c1-9677-38f6c95a0be8@huawei.com/ >> >> >> Thanks, >> Ryan >> >> Ryan Roberts (10): >> mm: Allow deferred splitting of arbitrary anon large folios >> mm: Non-pmd-mappable, large folios for folio_add_new_anon_rmap() >> mm: thp: Introduce multi-size THP sysfs interface >> mm: thp: Support allocation of anonymous multi-size THP >> 
selftests/mm/kugepaged: Restore thp settings at exit >> selftests/mm: Factor out thp settings management >> selftests/mm: Support multi-size THP interface in thp_settings >> selftests/mm/khugepaged: Enlighten for multi-size THP >> selftests/mm/cow: Generalize do_run_with_thp() helper >> selftests/mm/cow: Add tests for anonymous multi-size THP >> >> Documentation/admin-guide/mm/transhuge.rst | 97 ++++- >> Documentation/filesystems/proc.rst | 6 +- >> fs/proc/task_mmu.c | 3 +- >> include/linux/huge_mm.h | 116 ++++-- >> mm/huge_memory.c | 268 ++++++++++++-- >> mm/khugepaged.c | 20 +- >> mm/memory.c | 114 +++++- >> mm/page_vma_mapped.c | 3 +- >> mm/rmap.c | 32 +- >> tools/testing/selftests/mm/Makefile | 4 +- >> tools/testing/selftests/mm/cow.c | 185 +++++++--- >> tools/testing/selftests/mm/khugepaged.c | 410 ++++----------------- >> tools/testing/selftests/mm/run_vmtests.sh | 2 + >> tools/testing/selftests/mm/thp_settings.c | 349 ++++++++++++++++++ >> tools/testing/selftests/mm/thp_settings.h | 80 ++++ >> 15 files changed, 1177 insertions(+), 512 deletions(-) >> create mode 100644 tools/testing/selftests/mm/thp_settings.c >> create mode 100644 tools/testing/selftests/mm/thp_settings.h >> >> -- >> 2.25.1 >> > > Thanks > Barry
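To make the /proc/<pid>/pagemap suggestion above a little more concrete, a rough sketch of that kind of inspection might look like the following. It only approximates mTHP utilisation by spotting runs of physically contiguous present PFNs within a VMA (true folio membership is not visible from pagemap), it needs CAP_SYS_ADMIN to see PFNs, and all names in it are illustrative rather than taken from any posted script.

```python
# Approximate mTHP utilisation for one VMA by histogramming runs of physically
# contiguous, present pages in /proc/<pid>/pagemap. Illustrative sketch only.
import os
import struct
import sys

PAGE_SIZE = os.sysconf("SC_PAGE_SIZE")
PM_ENTRY = 8                      # each pagemap entry is 64 bits
PM_PRESENT = 1 << 63              # bit 63: page present
PM_PFN_MASK = (1 << 55) - 1       # bits 0-54: PFN (reads as 0 without CAP_SYS_ADMIN)

def vma_pfns(pid: int, start: int, end: int):
    """Yield the PFN (or None if not present) for each page in [start, end)."""
    with open(f"/proc/{pid}/pagemap", "rb") as pm:
        pm.seek((start // PAGE_SIZE) * PM_ENTRY)
        for _ in range((end - start) // PAGE_SIZE):
            (ent,) = struct.unpack("<Q", pm.read(PM_ENTRY))
            yield (ent & PM_PFN_MASK) if ent & PM_PRESENT else None

def contig_run_histogram(pid: int, start: int, end: int) -> dict[int, int]:
    """Map run-length-in-pages -> count; long runs hint at large folios."""
    hist: dict[int, int] = {}
    run, prev = 0, None
    for pfn in vma_pfns(pid, start, end):
        if pfn is not None and prev is not None and pfn == prev + 1:
            run += 1
        else:
            if run:
                hist[run] = hist.get(run, 0) + 1
            run = 1 if pfn is not None else 0
        prev = pfn
    if run:
        hist[run] = hist.get(run, 0) + 1
    return hist

if __name__ == "__main__":
    # Usage: pagemap_hist.py <pid> <vma_start_hex> <vma_end_hex> (range from /proc/<pid>/maps)
    pid, start, end = int(sys.argv[1]), int(sys.argv[2], 16), int(sys.argv[3], 16)
    for pages, count in sorted(contig_run_histogram(pid, start, end).items()):
        print(f"{pages * PAGE_SIZE // 1024:>6} kB run x {count}")
```

On a 4K-base-page kernel a 64K mTHP allocation would show up here as 16-page runs, although compaction or allocator luck can also produce contiguity that is not a single folio, which is exactly why proper counters remain on the to-do list.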
On 05/12/2023 03:37, John Hubbard wrote: > On 12/4/23 02:20, Ryan Roberts wrote: >> Hi All, >> >> A new week, a new version, a new name... This is v8 of a series to implement >> multi-size THP (mTHP) for anonymous memory (previously called "small-sized THP" >> and "large anonymous folios"). Matthew objected to "small huge" so hopefully >> this fares better. >> >> The objective of this is to improve performance by allocating larger chunks of >> memory during anonymous page faults: >> >> 1) Since SW (the kernel) is dealing with larger chunks of memory than base >> pages, there are efficiency savings to be had; fewer page faults, batched PTE >> and RMAP manipulation, reduced lru list, etc. In short, we reduce kernel >> overhead. This should benefit all architectures. >> 2) Since we are now mapping physically contiguous chunks of memory, we can take >> advantage of HW TLB compression techniques. A reduction in TLB pressure >> speeds up kernel and user space. arm64 systems have 2 mechanisms to coalesce >> TLB entries; "the contiguous bit" (architectural) and HPA (uarch). >> >> This version changes the name and tidies up some of the kernel code and test >> code, based on feedback against v7 (see change log for details). > > Using a couple of Armv8 systems, I've tested this patchset. I applied it > to top of tree (Linux 6.7-rc4), on top of your latest contig pte series > [1]. > > With those two patchsets applied, the mm selftests look OK--or at least > as OK as they normally do. I compared test runs between THP/mTHP set to > "always", vs "never", to verify that there were no new test failures. > Details: specifically, I set one particular page size (2 MB) to > "inherit", and then toggled /sys/kernel/mm/transparent_hugepage/enabled > between "always" and "never". Excellent - I'm guessing this was for 64K base pages? > > I also re-ran my usual compute/AI benchmark, and I'm still seeing the > same 10x performance improvement that I reported for the v6 patchset. > > So for this patchset and for [1] as well, please feel free to add: > > Tested-by: John Hubbard <jhubbard@nvidia.com> Thanks! > > > [1] https://lore.kernel.org/all/20231204105440.61448-1-ryan.roberts@arm.com/ > > > thanks,
On 2023/12/4 18:20, Ryan Roberts wrote: > Hi All, > > A new week, a new version, a new name... This is v8 of a series to implement > multi-size THP (mTHP) for anonymous memory (previously called "small-sized THP" > and "large anonymous folios"). Matthew objected to "small huge" so hopefully > this fares better. > > The objective of this is to improve performance by allocating larger chunks of > memory during anonymous page faults: > > 1) Since SW (the kernel) is dealing with larger chunks of memory than base > pages, there are efficiency savings to be had; fewer page faults, batched PTE > and RMAP manipulation, reduced lru list, etc. In short, we reduce kernel > overhead. This should benefit all architectures. > 2) Since we are now mapping physically contiguous chunks of memory, we can take > advantage of HW TLB compression techniques. A reduction in TLB pressure > speeds up kernel and user space. arm64 systems have 2 mechanisms to coalesce > TLB entries; "the contiguous bit" (architectural) and HPA (uarch). > > This version changes the name and tidies up some of the kernel code and test > code, based on feedback against v7 (see change log for details). > > By default, the existing behaviour (and performance) is maintained. The user > must explicitly enable multi-size THP to see the performance benefit. This is > done via a new sysfs interface (as recommended by David Hildenbrand - thanks to > David for the suggestion)! This interface is inspired by the existing > per-hugepage-size sysfs interface used by hugetlb, provides full backwards > compatibility with the existing PMD-size THP interface, and provides a base for > future extensibility. See [8] for detailed discussion of the interface. > > This series is based on mm-unstable (715b67adf4c8). > > > Prerequisites > ============= > > Some work items identified as being prerequisites are listed on page 3 at [9]. > The summary is: > > | item | status | > |:------------------------------|:------------------------| > | mlock | In mainline (v6.7) | > | madvise | In mainline (v6.6) | > | compaction | v1 posted [10] | > | numa balancing | Investigated: see below | > | user-triggered page migration | In mainline (v6.7) | > | khugepaged collapse | In mainline (NOP) | > > On NUMA balancing, which currently ignores any PTE-mapped THPs it encounters, > John Hubbard has investigated this and concluded that it is A) not clear at the > moment what a better policy might be for PTE-mapped THP and B) questions whether > this should really be considered a prerequisite given no regression is caused > for the default "multi-size THP disabled" case, and there is no correctness > issue when it is enabled - its just a potential for non-optimal performance. > > If there are no disagreements about removing numa balancing from the list (none > were raised when I first posted this comment against v7), then that just leaves > compaction which is in review on list at the moment. > > I really would like to get this series (and its remaining comapction > prerequisite) in for v6.8. I accept that it may be a bit optimistic at this > point, but lets see where we get to with review? > > > Testing > ======= > > The series includes patches for mm selftests to enlighten the cow and khugepaged > tests to explicitly test with multi-size THP, in the same way that PMD-sized > THP is tested. The new tests all pass, and no regressions are observed in the mm > selftest suite. I've also run my usual kernel compilation and java script > benchmarks without any issues. 
> > Refer to my performance numbers posted with v6 [6]. (These are for multi-size > THP only - they do not include the arm64 contpte follow-on series). > > John Hubbard at Nvidia has indicated dramatic 10x performance improvements for > some workloads at [11]. (Observed using v6 of this series as well as the arm64 > contpte series). > > Kefeng Wang at Huawei has also indicated he sees improvements at [12] although > there are some latency regressions also. Hi Ryan, Here is some test results based on v6.7-rc1 + [PATCH v7 00/10] Small-sized THP for anonymous memory + [PATCH v2 00/14] Transparent Contiguous PTEs for User Mappings case1: basepage 64K case2: basepage 4K + thp=64k + PAGE_ALLOC_COSTLY_ORDER = 3 case3: basepage 4K + thp=64k + PAGE_ALLOC_COSTLY_ORDER = 4 The results is compared with basepage 4K on Kunpeng920. Note, - The test based on ext4 filesystem and THP=2M is disabled. - The results were not analyzed, it is for reference only, as some values of test items are not consistent. 1) Unixbench 1core Index_Values_1core case1 case2 case3 Dhrystone_2_using_register_variables 0.28% 0.39% 0.17% Double-Precision_Whetstone -0.01% 0.00% 0.00% Execl_Throughput *21.13%* 2.16% 3.01% File_Copy_1024_bufsize_2000_maxblocks -0.51% *8.33%* *8.76%* File_Copy_256_bufsize_500_maxblocks 0.78% *11.89%* *10.85%* File_Copy_4096_bufsize_8000_maxblocks 7.42% 7.27% *10.66%* Pipe_Throughput -0.24% *6.82%* *5.08%* Pipe-based_Context_Switching 1.38% *13.49%* *9.91%* Process_Creation *32.46%* 4.30% *8.54%* Shell_Scripts_(1_concurrent) *31.67%* 1.92% 2.60% Shell_Scripts_(8_concurrent) *40.59%* 1.30% *5.29%* System_Call_Overhead 3.92% *8.13% 2.96% System_Benchmarks_Index_Score 10.66% 5.39% 5.58% For 1core, - case1 wins on Execl_Throughput/Process_Creation/Shell_Scripts a lot, and score higher 10.66% vs basepage 4K. - case2/3 wins on File_Copy/Pipe and score higher 5%+ than basepage 4K, also case3 looks better on Shell_Scripts_(8_concurrent) than case2. 2) Unixbench 128core Index_Values_128core case1 case2 case3 Dhrystone_2_using_register_variables 2.07% -0.03% -0.11% Double-Precision_Whetstone -0.03% 0.00% 0.00% Execl_Throughput *39.28%* -4.23% 1.93% File_Copy_1024_bufsize_2000_maxblocks 5.46% 1.30% 4.20% File_Copy_256_bufsize_500_maxblocks -8.89% *6.56% *5.02%* File_Copy_4096_bufsize_8000_maxblocks 3.43% *-5.46%* 0.56% Pipe_Throughput 3.80% *7.69% *7.80%* Pipe-based_Context_Switching *7.62%* 0.95% 4.69% Process_Creation *28.11%* -2.79% 2.40% Shell_Scripts_(1_concurrent) *39.68%* 1.86% *5.30%* Shell_Scripts_(8_concurrent) *41.35%* 2.49% *7.16%* System_Call_Overhead -1.55% -0.04% *8.23%* System_Benchmarks_Index_Score 12.08% 0.63% 3.88% For 128core, - case1 wins on Execl_Throughput/Process_Creation/Shell_Scripts a lot, also good at Pipe-based_Context_Switching, and score higher 12.08% vs basepage 4K. - case2/case3 wins on File_Copy_256/Pipe_Throughput, but case2 is not better than basepage 4K, case3 wins 3.88%. 
3) Lmbench Processor_processes Processor_Processes case1 case2 case3 null_call 1.76% 0.40% 0.65% null_io -0.76% -0.38% -0.23% stat *-16.09%* *-12.49%* 4.22% open_close -2.69% 4.51% 3.21% slct_TCP -0.56% 0.00% -0.44% sig_inst -1.54% 0.73% 0.70% sig_hndl -2.85% 0.01% 1.85% fork_proc *23.31%* 8.77% -5.42% exec_proc *13.22%* -0.30% 1.09% sh_proc *14.04%* -0.10% 1.09% - case1 is much better than basepage 4K, same as Unixbench test, case2 is better on fork_proc, but case3 is worse - note: the variance of fork/exec/sh is bigger than others 4) Lmbench Context_switching_ctxsw Context_switching_ctxsw case1 case2 case3 2p/0K -12.16% -5.29% -1.86% 2p/16K -11.26% -3.71% -4.53% 2p/64K -2.60% 3.84% -1.98% 8p/16K -7.56% -1.21% -0.88% 8p/64K 5.10% 4.88% 1.19% 16p/16K -5.81% -2.44% -3.84% 16p/64K 4.29% -1.94% -2.50% - case1/2/3 worse than basepage 4K and case1 is the worst. 4) Lmbench Local_latencies Local_latencies case1 case2 case3 Pipe -9.23% 0.58% -4.34% AF_UNIX -5.34% -1.76% 3.03% UDP -6.70% -5.96% -9.81% TCP -7.95% -7.58% -5.63% TCP_conn -213.99% -227.78% -659.67% - TCP_conn is very unreliable, ignore it - case1/2/3 slower than basepage 4K 5) Lmbench File_&_VM_latencies File_&_VM_latencies case1 case2 case3 10K_File_Create 2.60% -0.52% 2.66% 10K_File_Delete -2.91% -5.20% -2.11% 10K_File_Create 10.23% 1.18% 0.12% 10K_File_Delete -17.76% -2.97% -1.49% Mmap_Latency *63.05%* 2.57% -0.96% Prot_Fault 10.41% -3.21% *-19.11%* Page_Fault *-132.01%* 2.35% -0.79% 100fd_selct -1.20% 0.10% 0.31% - case1 is very good at Mmap_Latency and not good at Page_fault - case2/3 slower on Prot_Faul/10K_FILE_Delete vs basepage 4k, the rest doesn't look much different. 6) Lmbench Local_bandwidths Local_bandwidths case1 case2 case3 Pipe 265.22% 15.44% 11.33% AF_UNIX 13.41% -2.66% 2.63% TCP -1.30% 25.90% 2.48% File_reread 14.79% 31.52% -14.16% Mmap_reread 27.47% 49.00% -0.11% Bcopy(libc) 2.58% 2.45% 2.46% Bcopy(hand) 25.78% 22.56% 22.68% Mem_read 38.26% 36.80% 36.49% Mem_write 10.93% 3.44% 3.12% - case1 is very good at bandwidth, case2 is better than basepage 4k but lower than case1, case3 is bad at File_reread 7)Lmbench Memory_latencies Memory_latencies case1 case2 case3 L1_$ 0.02% 0.00% -0.03% L2_$ -1.56% -2.65% -1.25% Main_mem 50.82% 32.51% 33.47% Rand_mem 15.29% -8.79% -8.80% - case1 also good at Main/Rand mem access latencies, - case2/case3 is better at Main_mem, but worse at Rand_mem. Tested-by: Kefeng Wang <wangkefeng.wang@huawei.com>
On 04.12.23 11:20, Ryan Roberts wrote: > Hi All, > > A new week, a new version, a new name... This is v8 of a series to implement > multi-size THP (mTHP) for anonymous memory (previously called "small-sized THP" > and "large anonymous folios"). Matthew objected to "small huge" so hopefully > this fares better. > > The objective of this is to improve performance by allocating larger chunks of > memory during anonymous page faults: > > 1) Since SW (the kernel) is dealing with larger chunks of memory than base > pages, there are efficiency savings to be had; fewer page faults, batched PTE > and RMAP manipulation, reduced lru list, etc. In short, we reduce kernel > overhead. This should benefit all architectures. > 2) Since we are now mapping physically contiguous chunks of memory, we can take > advantage of HW TLB compression techniques. A reduction in TLB pressure > speeds up kernel and user space. arm64 systems have 2 mechanisms to coalesce > TLB entries; "the contiguous bit" (architectural) and HPA (uarch). > > This version changes the name and tidies up some of the kernel code and test > code, based on feedback against v7 (see change log for details). > > By default, the existing behaviour (and performance) is maintained. The user > must explicitly enable multi-size THP to see the performance benefit. This is > done via a new sysfs interface (as recommended by David Hildenbrand - thanks to > David for the suggestion)! This interface is inspired by the existing > per-hugepage-size sysfs interface used by hugetlb, provides full backwards > compatibility with the existing PMD-size THP interface, and provides a base for > future extensibility. See [8] for detailed discussion of the interface. > > This series is based on mm-unstable (715b67adf4c8). I took a look at the core pieces. Some things might want some smaller tweaks, but nothing that should stop this from having fun in mm-unstable, and replacing the smaller things as we move forward.
On 12/5/23 3:13 AM, Ryan Roberts wrote: > On 05/12/2023 03:37, John Hubbard wrote: >> On 12/4/23 02:20, Ryan Roberts wrote: ... >> With those two patchsets applied, the mm selftests look OK--or at least >> as OK as they normally do. I compared test runs between THP/mTHP set to >> "always", vs "never", to verify that there were no new test failures. >> Details: specifically, I set one particular page size (2 MB) to >> "inherit", and then toggled /sys/kernel/mm/transparent_hugepage/enabled >> between "always" and "never". > > Excellent - I'm guessing this was for 64K base pages? Yes. thanks,
On 05/12/2023 14:19, Kefeng Wang wrote: > > > On 2023/12/4 18:20, Ryan Roberts wrote: >> Hi All, >> >> A new week, a new version, a new name... This is v8 of a series to implement >> multi-size THP (mTHP) for anonymous memory (previously called "small-sized THP" >> and "large anonymous folios"). Matthew objected to "small huge" so hopefully >> this fares better. >> >> The objective of this is to improve performance by allocating larger chunks of >> memory during anonymous page faults: >> >> 1) Since SW (the kernel) is dealing with larger chunks of memory than base >> pages, there are efficiency savings to be had; fewer page faults, batched PTE >> and RMAP manipulation, reduced lru list, etc. In short, we reduce kernel >> overhead. This should benefit all architectures. >> 2) Since we are now mapping physically contiguous chunks of memory, we can take >> advantage of HW TLB compression techniques. A reduction in TLB pressure >> speeds up kernel and user space. arm64 systems have 2 mechanisms to coalesce >> TLB entries; "the contiguous bit" (architectural) and HPA (uarch). >> >> This version changes the name and tidies up some of the kernel code and test >> code, based on feedback against v7 (see change log for details). >> >> By default, the existing behaviour (and performance) is maintained. The user >> must explicitly enable multi-size THP to see the performance benefit. This is >> done via a new sysfs interface (as recommended by David Hildenbrand - thanks to >> David for the suggestion)! This interface is inspired by the existing >> per-hugepage-size sysfs interface used by hugetlb, provides full backwards >> compatibility with the existing PMD-size THP interface, and provides a base for >> future extensibility. See [8] for detailed discussion of the interface. >> >> This series is based on mm-unstable (715b67adf4c8). >> >> >> Prerequisites >> ============= >> >> Some work items identified as being prerequisites are listed on page 3 at [9]. >> The summary is: >> >> | item | status | >> |:------------------------------|:------------------------| >> | mlock | In mainline (v6.7) | >> | madvise | In mainline (v6.6) | >> | compaction | v1 posted [10] | >> | numa balancing | Investigated: see below | >> | user-triggered page migration | In mainline (v6.7) | >> | khugepaged collapse | In mainline (NOP) | >> >> On NUMA balancing, which currently ignores any PTE-mapped THPs it encounters, >> John Hubbard has investigated this and concluded that it is A) not clear at the >> moment what a better policy might be for PTE-mapped THP and B) questions whether >> this should really be considered a prerequisite given no regression is caused >> for the default "multi-size THP disabled" case, and there is no correctness >> issue when it is enabled - its just a potential for non-optimal performance. >> >> If there are no disagreements about removing numa balancing from the list (none >> were raised when I first posted this comment against v7), then that just leaves >> compaction which is in review on list at the moment. >> >> I really would like to get this series (and its remaining comapction >> prerequisite) in for v6.8. I accept that it may be a bit optimistic at this >> point, but lets see where we get to with review? >> >> >> Testing >> ======= >> >> The series includes patches for mm selftests to enlighten the cow and khugepaged >> tests to explicitly test with multi-size THP, in the same way that PMD-sized >> THP is tested. 
The new tests all pass, and no regressions are observed in the mm >> selftest suite. I've also run my usual kernel compilation and java script >> benchmarks without any issues. >> >> Refer to my performance numbers posted with v6 [6]. (These are for multi-size >> THP only - they do not include the arm64 contpte follow-on series). >> >> John Hubbard at Nvidia has indicated dramatic 10x performance improvements for >> some workloads at [11]. (Observed using v6 of this series as well as the arm64 >> contpte series). >> >> Kefeng Wang at Huawei has also indicated he sees improvements at [12] although >> there are some latency regressions also. > > Hi Ryan, > > Here is some test results based on v6.7-rc1 + > [PATCH v7 00/10] Small-sized THP for anonymous memory + > [PATCH v2 00/14] Transparent Contiguous PTEs for User Mappings > > case1: basepage 64K > case2: basepage 4K + thp=64k + PAGE_ALLOC_COSTLY_ORDER = 3 > case3: basepage 4K + thp=64k + PAGE_ALLOC_COSTLY_ORDER = 4 Thanks for sharing these results. With the exception of a few outliers, It looks like the ~rough conclusion is that bandwidth improves, but not as much as 64K base pages, and latency regresses, but also not as much as 64K base pages? I expect that over time, as we add more optimizations, we will get bandwidth closer to 64K base pages; one crucial one is getting executable file-backed memory into contpte mappings, for example. It's probably not time to switch PAGE_ALLOC_COSTLY_ORDER quite yet; but something to keep an eye on and consider down the road? Thanks, Ryan > > The results is compared with basepage 4K on Kunpeng920. > > Note, > - The test based on ext4 filesystem and THP=2M is disabled. > - The results were not analyzed, it is for reference only, > as some values of test items are not consistent. > > 1) Unixbench 1core > Index_Values_1core case1 case2 case3 > Dhrystone_2_using_register_variables 0.28% 0.39% 0.17% > Double-Precision_Whetstone -0.01% 0.00% 0.00% > Execl_Throughput *21.13%* 2.16% 3.01% > File_Copy_1024_bufsize_2000_maxblocks -0.51% *8.33%* *8.76%* > File_Copy_256_bufsize_500_maxblocks 0.78% *11.89%* *10.85%* > File_Copy_4096_bufsize_8000_maxblocks 7.42% 7.27% *10.66%* > Pipe_Throughput -0.24% *6.82%* *5.08%* > Pipe-based_Context_Switching 1.38% *13.49%* *9.91%* > Process_Creation *32.46%* 4.30% *8.54%* > Shell_Scripts_(1_concurrent) *31.67%* 1.92% 2.60% > Shell_Scripts_(8_concurrent) *40.59%* 1.30% *5.29%* > System_Call_Overhead 3.92% *8.13% 2.96% > > System_Benchmarks_Index_Score 10.66% 5.39% 5.58% > > For 1core, > - case1 wins on Execl_Throughput/Process_Creation/Shell_Scripts > a lot, and score higher 10.66% vs basepage 4K. > - case2/3 wins on File_Copy/Pipe and score higher 5%+ than basepage 4K, > also case3 looks better on Shell_Scripts_(8_concurrent) than case2. 
> > 2) Unixbench 128core > Index_Values_128core case1 case2 case3 > Dhrystone_2_using_register_variables 2.07% -0.03% -0.11% > Double-Precision_Whetstone -0.03% 0.00% 0.00% > Execl_Throughput *39.28%* -4.23% 1.93% > File_Copy_1024_bufsize_2000_maxblocks 5.46% 1.30% 4.20% > File_Copy_256_bufsize_500_maxblocks -8.89% *6.56% *5.02%* > File_Copy_4096_bufsize_8000_maxblocks 3.43% *-5.46%* 0.56% > Pipe_Throughput 3.80% *7.69% *7.80%* > Pipe-based_Context_Switching *7.62%* 0.95% 4.69% > Process_Creation *28.11%* -2.79% 2.40% > Shell_Scripts_(1_concurrent) *39.68%* 1.86% *5.30%* > Shell_Scripts_(8_concurrent) *41.35%* 2.49% *7.16%* > System_Call_Overhead -1.55% -0.04% *8.23%* > > System_Benchmarks_Index_Score 12.08% 0.63% 3.88% > > For 128core, > - case1 wins on Execl_Throughput/Process_Creation/Shell_Scripts > a lot, also good at Pipe-based_Context_Switching, and score higher > 12.08% vs basepage 4K. > - case2/case3 wins on File_Copy_256/Pipe_Throughput, but case2 is > not better than basepage 4K, case3 wins 3.88%. > > 3) Lmbench Processor_processes > Processor_Processes case1 case2 case3 > null_call 1.76% 0.40% 0.65% > null_io -0.76% -0.38% -0.23% > stat *-16.09%* *-12.49%* 4.22% > open_close -2.69% 4.51% 3.21% > slct_TCP -0.56% 0.00% -0.44% > sig_inst -1.54% 0.73% 0.70% > sig_hndl -2.85% 0.01% 1.85% > fork_proc *23.31%* 8.77% -5.42% > exec_proc *13.22%* -0.30% 1.09% > sh_proc *14.04%* -0.10% 1.09% > > - case1 is much better than basepage 4K, same as Unixbench test, > case2 is better on fork_proc, but case3 is worse > - note: the variance of fork/exec/sh is bigger than others > > 4) Lmbench Context_switching_ctxsw > Context_switching_ctxsw case1 case2 case3 > 2p/0K -12.16% -5.29% -1.86% > 2p/16K -11.26% -3.71% -4.53% > 2p/64K -2.60% 3.84% -1.98% > 8p/16K -7.56% -1.21% -0.88% > 8p/64K 5.10% 4.88% 1.19% > 16p/16K -5.81% -2.44% -3.84% > 16p/64K 4.29% -1.94% -2.50% > - case1/2/3 worse than basepage 4K and case1 is the worst. > > 4) Lmbench Local_latencies > Local_latencies case1 case2 case3 > Pipe -9.23% 0.58% -4.34% > AF_UNIX -5.34% -1.76% 3.03% > UDP -6.70% -5.96% -9.81% > TCP -7.95% -7.58% -5.63% > TCP_conn -213.99% -227.78% -659.67% > - TCP_conn is very unreliable, ignore it > - case1/2/3 slower than basepage 4K > > 5) Lmbench File_&_VM_latencies > File_&_VM_latencies case1 case2 case3 > 10K_File_Create 2.60% -0.52% 2.66% > 10K_File_Delete -2.91% -5.20% -2.11% > 10K_File_Create 10.23% 1.18% 0.12% > 10K_File_Delete -17.76% -2.97% -1.49% > Mmap_Latency *63.05%* 2.57% -0.96% > Prot_Fault 10.41% -3.21% *-19.11%* > Page_Fault *-132.01%* 2.35% -0.79% > 100fd_selct -1.20% 0.10% 0.31% > - case1 is very good at Mmap_Latency and not good at Page_fault > - case2/3 slower on Prot_Faul/10K_FILE_Delete vs basepage 4k, > the rest doesn't look much different. 
>
> 7) Lmbench Local_bandwidths
> Local_bandwidths    case1     case2     case3
> Pipe              265.22%    15.44%    11.33%
> AF_UNIX            13.41%    -2.66%     2.63%
> TCP                -1.30%    25.90%     2.48%
> File_reread        14.79%    31.52%   -14.16%
> Mmap_reread        27.47%    49.00%    -0.11%
> Bcopy(libc)         2.58%     2.45%     2.46%
> Bcopy(hand)        25.78%    22.56%    22.68%
> Mem_read           38.26%    36.80%    36.49%
> Mem_write          10.93%     3.44%     3.12%
>
> - case1 is very good at bandwidth; case2 is better than basepage 4K
>   but lower than case1; case3 is bad at File_reread
>
> 8) Lmbench Memory_latencies
> Memory_latencies    case1     case2     case3
> L1_$                 0.02%     0.00%    -0.03%
> L2_$                -1.56%    -2.65%    -1.25%
> Main_mem            50.82%    32.51%    33.47%
> Rand_mem            15.29%    -8.79%    -8.80%
>
> - case1 is also good at Main/Rand mem access latencies
> - case2/case3 are better at Main_mem, but worse at Rand_mem
>
> Tested-by: Kefeng Wang <wangkefeng.wang@huawei.com>
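For context on the case2/case3 split in the results above: with 4K base pages, the 64K mTHP size is an order-4 allocation, which sits just above the mainline default of PAGE_ALLOC_COSTLY_ORDER (3 in include/linux/mm.h), the threshold above which the page allocator treats an allocation as "costly" and is less persistent about reclaim/compaction before failing it. The snippet below is only an illustrative sketch of that arithmetic, not code from the series; the constant value and the order calculation mirror mainline, while the variables and output are made up for the example.

#include <stdio.h>

/* Default value from mainline include/linux/mm.h at the time of this thread. */
#define PAGE_ALLOC_COSTLY_ORDER	3

int main(void)
{
	unsigned int base_shift = 12;          /* 4K base pages (case2/case3) */
	unsigned int mthp_size  = 64 * 1024;   /* the 64K mTHP size under test */
	unsigned int order = 0;

	/* order = log2(mthp_size / base_page_size) */
	while ((1u << (base_shift + order)) < mthp_size)
		order++;

	printf("64K folio on 4K base pages is an order-%u allocation\n", order);
	printf("costly by default? %s\n",
	       order > PAGE_ALLOC_COSTLY_ORDER ? "yes (order > 3)" : "no");
	return 0;
}

This is why case3 raises PAGE_ALLOC_COSTLY_ORDER to 4: it keeps the order-4 anonymous folio allocations on the non-costly side of the threshold for the duration of the test.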
On 05/12/2023 17:21, David Hildenbrand wrote: > On 04.12.23 11:20, Ryan Roberts wrote: >> Hi All, >> >> A new week, a new version, a new name... This is v8 of a series to implement >> multi-size THP (mTHP) for anonymous memory (previously called "small-sized THP" >> and "large anonymous folios"). Matthew objected to "small huge" so hopefully >> this fares better. >> >> The objective of this is to improve performance by allocating larger chunks of >> memory during anonymous page faults: >> >> 1) Since SW (the kernel) is dealing with larger chunks of memory than base >> pages, there are efficiency savings to be had; fewer page faults, batched PTE >> and RMAP manipulation, reduced lru list, etc. In short, we reduce kernel >> overhead. This should benefit all architectures. >> 2) Since we are now mapping physically contiguous chunks of memory, we can take >> advantage of HW TLB compression techniques. A reduction in TLB pressure >> speeds up kernel and user space. arm64 systems have 2 mechanisms to coalesce >> TLB entries; "the contiguous bit" (architectural) and HPA (uarch). >> >> This version changes the name and tidies up some of the kernel code and test >> code, based on feedback against v7 (see change log for details). >> >> By default, the existing behaviour (and performance) is maintained. The user >> must explicitly enable multi-size THP to see the performance benefit. This is >> done via a new sysfs interface (as recommended by David Hildenbrand - thanks to >> David for the suggestion)! This interface is inspired by the existing >> per-hugepage-size sysfs interface used by hugetlb, provides full backwards >> compatibility with the existing PMD-size THP interface, and provides a base for >> future extensibility. See [8] for detailed discussion of the interface. >> >> This series is based on mm-unstable (715b67adf4c8). > > I took a look at the core pieces. Some things might want some smaller tweaks, > but nothing that should stop this from having fun in mm-unstable, and replacing > the smaller things as we move forward. > Thanks! I'll address your comments and see if I can post another (final??) version next week.
On 06.12.23 11:13, Ryan Roberts wrote: > On 05/12/2023 17:21, David Hildenbrand wrote: >> On 04.12.23 11:20, Ryan Roberts wrote: >>> Hi All, >>> >>> A new week, a new version, a new name... This is v8 of a series to implement >>> multi-size THP (mTHP) for anonymous memory (previously called "small-sized THP" >>> and "large anonymous folios"). Matthew objected to "small huge" so hopefully >>> this fares better. >>> >>> The objective of this is to improve performance by allocating larger chunks of >>> memory during anonymous page faults: >>> >>> 1) Since SW (the kernel) is dealing with larger chunks of memory than base >>> pages, there are efficiency savings to be had; fewer page faults, batched PTE >>> and RMAP manipulation, reduced lru list, etc. In short, we reduce kernel >>> overhead. This should benefit all architectures. >>> 2) Since we are now mapping physically contiguous chunks of memory, we can take >>> advantage of HW TLB compression techniques. A reduction in TLB pressure >>> speeds up kernel and user space. arm64 systems have 2 mechanisms to coalesce >>> TLB entries; "the contiguous bit" (architectural) and HPA (uarch). >>> >>> This version changes the name and tidies up some of the kernel code and test >>> code, based on feedback against v7 (see change log for details). >>> >>> By default, the existing behaviour (and performance) is maintained. The user >>> must explicitly enable multi-size THP to see the performance benefit. This is >>> done via a new sysfs interface (as recommended by David Hildenbrand - thanks to >>> David for the suggestion)! This interface is inspired by the existing >>> per-hugepage-size sysfs interface used by hugetlb, provides full backwards >>> compatibility with the existing PMD-size THP interface, and provides a base for >>> future extensibility. See [8] for detailed discussion of the interface. >>> >>> This series is based on mm-unstable (715b67adf4c8). >> >> I took a look at the core pieces. Some things might want some smaller tweaks, >> but nothing that should stop this from having fun in mm-unstable, and replacing >> the smaller things as we move forward. >> > > Thanks! I'll address your comments and see if I can post another (final??) > version next week. It's always possible to do incremental changes on top that Andrew will squash in the end. I even recall that he prefers that way once a series has been in mm-unstable for a bit, so one can better observe the diff and which effects they have.
On 06/12/2023 10:22, David Hildenbrand wrote: > On 06.12.23 11:13, Ryan Roberts wrote: >> On 05/12/2023 17:21, David Hildenbrand wrote: >>> On 04.12.23 11:20, Ryan Roberts wrote: >>>> Hi All, >>>> >>>> A new week, a new version, a new name... This is v8 of a series to implement >>>> multi-size THP (mTHP) for anonymous memory (previously called "small-sized THP" >>>> and "large anonymous folios"). Matthew objected to "small huge" so hopefully >>>> this fares better. >>>> >>>> The objective of this is to improve performance by allocating larger chunks of >>>> memory during anonymous page faults: >>>> >>>> 1) Since SW (the kernel) is dealing with larger chunks of memory than base >>>> pages, there are efficiency savings to be had; fewer page faults, >>>> batched PTE >>>> and RMAP manipulation, reduced lru list, etc. In short, we reduce kernel >>>> overhead. This should benefit all architectures. >>>> 2) Since we are now mapping physically contiguous chunks of memory, we can take >>>> advantage of HW TLB compression techniques. A reduction in TLB pressure >>>> speeds up kernel and user space. arm64 systems have 2 mechanisms to >>>> coalesce >>>> TLB entries; "the contiguous bit" (architectural) and HPA (uarch). >>>> >>>> This version changes the name and tidies up some of the kernel code and test >>>> code, based on feedback against v7 (see change log for details). >>>> >>>> By default, the existing behaviour (and performance) is maintained. The user >>>> must explicitly enable multi-size THP to see the performance benefit. This is >>>> done via a new sysfs interface (as recommended by David Hildenbrand - thanks to >>>> David for the suggestion)! This interface is inspired by the existing >>>> per-hugepage-size sysfs interface used by hugetlb, provides full backwards >>>> compatibility with the existing PMD-size THP interface, and provides a base for >>>> future extensibility. See [8] for detailed discussion of the interface. >>>> >>>> This series is based on mm-unstable (715b67adf4c8). >>> >>> I took a look at the core pieces. Some things might want some smaller tweaks, >>> but nothing that should stop this from having fun in mm-unstable, and replacing >>> the smaller things as we move forward. >>> >> >> Thanks! I'll address your comments and see if I can post another (final??) >> version next week. > > It's always possible to do incremental changes on top that Andrew will squash in > the end. I even recall that he prefers that way once a series has been in > mm-unstable for a bit, so one can better observe the diff and which effects they > have. > I've responded to all your comments. There are a bunch of changes that I agree would be good to make (and some which I disagree with - would be good if you get a chance to respond). I think I can get all the changes done and tested by Friday. So perhaps it's simplest to keep this out of mm-unstable until then, and put the new version in on Friday? Then if there are any more small changes to do, I can do those as diffs? Thanks, Ryan
On 2023/12/6 18:08, Ryan Roberts wrote: > On 05/12/2023 14:19, Kefeng Wang wrote: >> >> >> On 2023/12/4 18:20, Ryan Roberts wrote: >>> Hi All, >>> >>> A new week, a new version, a new name... This is v8 of a series to implement >>> multi-size THP (mTHP) for anonymous memory (previously called "small-sized THP" >>> and "large anonymous folios"). Matthew objected to "small huge" so hopefully >>> this fares better. >>> >>> The objective of this is to improve performance by allocating larger chunks of >>> memory during anonymous page faults: >>> >>> 1) Since SW (the kernel) is dealing with larger chunks of memory than base >>> pages, there are efficiency savings to be had; fewer page faults, batched PTE >>> and RMAP manipulation, reduced lru list, etc. In short, we reduce kernel >>> overhead. This should benefit all architectures. >>> 2) Since we are now mapping physically contiguous chunks of memory, we can take >>> advantage of HW TLB compression techniques. A reduction in TLB pressure >>> speeds up kernel and user space. arm64 systems have 2 mechanisms to coalesce >>> TLB entries; "the contiguous bit" (architectural) and HPA (uarch). >>> >>> This version changes the name and tidies up some of the kernel code and test >>> code, based on feedback against v7 (see change log for details). >>> >>> By default, the existing behaviour (and performance) is maintained. The user >>> must explicitly enable multi-size THP to see the performance benefit. This is >>> done via a new sysfs interface (as recommended by David Hildenbrand - thanks to >>> David for the suggestion)! This interface is inspired by the existing >>> per-hugepage-size sysfs interface used by hugetlb, provides full backwards >>> compatibility with the existing PMD-size THP interface, and provides a base for >>> future extensibility. See [8] for detailed discussion of the interface. >>> >>> This series is based on mm-unstable (715b67adf4c8). >>> >>> >>> Prerequisites >>> ============= >>> >>> Some work items identified as being prerequisites are listed on page 3 at [9]. >>> The summary is: >>> >>> | item | status | >>> |:------------------------------|:------------------------| >>> | mlock | In mainline (v6.7) | >>> | madvise | In mainline (v6.6) | >>> | compaction | v1 posted [10] | >>> | numa balancing | Investigated: see below | >>> | user-triggered page migration | In mainline (v6.7) | >>> | khugepaged collapse | In mainline (NOP) | >>> >>> On NUMA balancing, which currently ignores any PTE-mapped THPs it encounters, >>> John Hubbard has investigated this and concluded that it is A) not clear at the >>> moment what a better policy might be for PTE-mapped THP and B) questions whether >>> this should really be considered a prerequisite given no regression is caused >>> for the default "multi-size THP disabled" case, and there is no correctness >>> issue when it is enabled - its just a potential for non-optimal performance. >>> >>> If there are no disagreements about removing numa balancing from the list (none >>> were raised when I first posted this comment against v7), then that just leaves >>> compaction which is in review on list at the moment. >>> >>> I really would like to get this series (and its remaining comapction >>> prerequisite) in for v6.8. I accept that it may be a bit optimistic at this >>> point, but lets see where we get to with review? 
>>>
>>>
>>> Testing
>>> =======
>>>
>>> The series includes patches for mm selftests to enlighten the cow and khugepaged
>>> tests to explicitly test with multi-size THP, in the same way that PMD-sized
>>> THP is tested. The new tests all pass, and no regressions are observed in the mm
>>> selftest suite. I've also run my usual kernel compilation and JavaScript
>>> benchmarks without any issues.
>>>
>>> Refer to my performance numbers posted with v6 [6]. (These are for multi-size
>>> THP only - they do not include the arm64 contpte follow-on series).
>>>
>>> John Hubbard at Nvidia has indicated dramatic 10x performance improvements for
>>> some workloads at [11]. (Observed using v6 of this series as well as the arm64
>>> contpte series).
>>>
>>> Kefeng Wang at Huawei has also indicated he sees improvements at [12] although
>>> there are some latency regressions also.
>>
>> Hi Ryan,
>>
>> Here are some test results based on v6.7-rc1 +
>> [PATCH v7 00/10] Small-sized THP for anonymous memory +
>> [PATCH v2 00/14] Transparent Contiguous PTEs for User Mappings
>>
>> case1: basepage 64K
>> case2: basepage 4K + thp=64k + PAGE_ALLOC_COSTLY_ORDER = 3
>> case3: basepage 4K + thp=64k + PAGE_ALLOC_COSTLY_ORDER = 4
>
> Thanks for sharing these results. With the exception of a few outliers, it looks
> like the rough conclusion is that bandwidth improves, but not as much as with 64K
> base pages, and latency regresses, but also not as much as with 64K base pages?

It depends on the test case; each configuration has its own advantages and
disadvantages, but 64K base pages are still better in most cases.

>
> I expect that over time, as we add more optimizations, we will get bandwidth
> closer to 64K base pages; one crucial optimization is getting executable
> file-backed memory into contpte mappings, for example.

Yes, this will need some time to optimize; maybe we should also provide more
policy, e.g. order selection and per-task/per-cgroup control?

>
> It's probably not time to switch PAGE_ALLOC_COSTLY_ORDER quite yet, but it is
> something to keep an eye on and consider down the road?

That change was just for testing; it doesn't seem to bring a large gain in the
unixbench/lmbench test cases, and it shouldn't be considered part of this
patchset.
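On reproducing the "thp=64k" configuration used in the tests above: the per-size sysfs interface from the cover letter exposes one directory per supported size under /sys/kernel/mm/transparent_hugepage/. The following is a minimal sketch of a test setup that enables only the 64K size while leaving PMD-size THP off (matching the "THP=2M is disabled" note in the results). The directory name "hugepages-64kB" and the value strings are assumptions based on the hugetlb-inspired layout described in [8]; check them against the Documentation changes in the series before relying on them.

#include <stdio.h>

/*
 * Illustrative only: paths assume the per-size "hugepages-<size>kB"
 * directories described in the cover letter.
 */
static int write_sysfs(const char *path, const char *val)
{
	FILE *f = fopen(path, "w");

	if (!f) {
		perror(path);
		return -1;
	}
	fputs(val, f);
	fclose(f);
	return 0;
}

int main(void)
{
	/* Keep PMD-size (2M) THP off, so only the 64K size is in play. */
	write_sysfs("/sys/kernel/mm/transparent_hugepage/enabled", "never");
	/* Enable 64K multi-size THP for all anonymous mappings. */
	write_sysfs("/sys/kernel/mm/transparent_hugepage/hugepages-64kB/enabled",
		    "always");
	return 0;
}

The equivalent shell commands (echo never/always into the same two files) would do the same job from a benchmark script.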