Message ID | 20230727141837.3386072-1-ryan.roberts@arm.com
---|---
Series | Optimize large folio interaction with deferred split
Hi Andrew,

After discussion about this in Matthew's THP Cabal, we have decided to take a
different approach with this patch set. Could you therefore remove it from
mm-unstable, please? Sorry about the noise.

I'm going to try 2 different approaches:
- avoid the split lock contention by using mmu gather (suggested by Kirill)
- expand the zap pte batching to also cover file folios (as requested by Yu).

Thanks,
Ryan

On 27/07/2023 15:18, Ryan Roberts wrote:
> Hi All,
>
> This is v4 of a small series in support of my work to enable the use of large
> folios for anonymous memory (known as "FLEXIBLE_THP" or "LARGE_ANON_FOLIO") [5].
> It first makes it possible to add large, non-pmd-mappable folios to the deferred
> split queue. Then it modifies zap_pte_range() to batch-remove spans of
> physically contiguous pages from the rmap, which means that in the common case,
> we elide the need to ever put the folio on the deferred split queue, thus
> reducing lock contention and improving performance.
>
> This becomes more visible once we have lots of large anonymous folios in the
> system, and Huang Ying has suggested that solving this needs to be a
> prerequisite for merging the main FLEXIBLE_THP/LARGE_ANON_FOLIO work.
>
> The series applies on top of v6.5-rc3 and a branch is available at [4].
>
> NOTE: v3 is currently in mm-unstable and has a bug that affects s390, which this
> version fixes.
>
>
> Changes since v3 [3]
> --------------------
>
> - Fixed the bug reported on s390 [6]
>   - Since s390 enables MMU_GATHER_NO_GATHER, __tlb_remove_page() causes a ref
>     to be dropped on the page, but we were still using the page after that
>     function call.
>   - Fixed by using folio_get()/folio_put() to guarantee the lifetime of the page.
>   - Thanks to Nathan Chancellor for the bug report and helping me get set up
>     with s390!
> - Don't use the batch path if the folio is not large.
>
>
> Changes since v2 [2]
> --------------------
>
> - patch 2: Reworked at Yu Zhao's request to reduce duplicated code.
>   - page_remove_rmap() now forwards to folio_remove_rmap_range() for the
>     !compound (PMD mapped) case.
>   - Both page_remove_rmap() and folio_remove_rmap_range() share a common
>     epilogue via the new helper function __remove_rmap_finish().
>   - As a result of the changes, I've removed the previous Reviewed-bys.
> - The other 2 patches are unchanged.
>
>
> Changes since v1 [1]
> --------------------
>
> - patch 2: Modified the doc comment for folio_remove_rmap_range().
> - patch 2: Hoisted the _nr_pages_mapped manipulation out of the page loop so it
>   is now modified once per folio_remove_rmap_range() call.
> - patch 2: Added a check that the page range is fully contained by the folio in
>   folio_remove_rmap_range().
> - patch 2: Fixed some nits raised by Huang, Ying for folio_remove_rmap_range().
> - patch 3: Support batch-zap of all anon pages, not just those in anon vmas.
> - patch 3: Renamed various functions to make their use clear.
> - patch 3: Various minor refactoring/cleanups.
> - Added Reviewed-by tags - thanks!
>
>
> [1] https://lore.kernel.org/linux-mm/20230717143110.260162-1-ryan.roberts@arm.com/
> [2] https://lore.kernel.org/linux-mm/20230719135450.545227-1-ryan.roberts@arm.com/
> [3] https://lore.kernel.org/linux-mm/20230720112955.643283-1-ryan.roberts@arm.com/
> [4] https://gitlab.arm.com/linux-arm/linux-rr/-/tree/features/granule_perf/deferredsplit-lkml_v4
> [5] https://lore.kernel.org/linux-mm/20230714160407.4142030-1-ryan.roberts@arm.com/
> [6] https://lore.kernel.org/linux-mm/20230726161942.GA1123863@dev-arch.thelio-3990X/
>
> Thanks,
> Ryan
>
>
> Ryan Roberts (3):
>   mm: Allow deferred splitting of arbitrary large anon folios
>   mm: Implement folio_remove_rmap_range()
>   mm: Batch-zap large anonymous folio PTE mappings
>
>  include/linux/rmap.h |   2 +
>  mm/memory.c          | 132 +++++++++++++++++++++++++++++++++++++++++++
>  mm/rmap.c            | 125 ++++++++++++++++++++++++++++++----------
>  3 files changed, 229 insertions(+), 30 deletions(-)
>
> --
> 2.25.1
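For readers skimming the archive, the following much-simplified sketch illustrates the rmap batching idea described in the cover letter above. The function name mirrors the one introduced by the series, but the body is an illustrative assumption, not the code from patch 2: the real implementation also maintains _nr_pages_mapped, distinguishes anon from file folios, and handles several corner cases omitted here.

```c
#include <linux/mm.h>
#include <linux/rmap.h>
#include <linux/huge_mm.h>

/*
 * Simplified sketch (not the actual patch): tear down the rmap for 'nr'
 * physically contiguous pages of one large folio in a single call, so the
 * folio-wide bookkeeping - including the decision about queueing the folio
 * for deferred splitting - happens once per range rather than once per page.
 * The vma parameter is kept because the real function needs it for mlock
 * handling, which is omitted here.
 */
static void folio_remove_rmap_range_sketch(struct folio *folio,
					   struct page *page, int nr,
					   struct vm_area_struct *vma)
{
	int nr_unmapped = 0;
	int i;

	for (i = 0; i < nr; i++, page++) {
		/* Drop one PTE mapping of this page. */
		if (atomic_add_negative(-1, &page->_mapcount))
			nr_unmapped++;
	}

	if (!nr_unmapped)
		return;

	/* Folio-wide statistics are updated once for the whole range
	 * (anon-only here, for simplicity). */
	__lruvec_stat_mod_folio(folio, NR_ANON_MAPPED, -nr_unmapped);

	/*
	 * Only after the whole range has been removed do we decide whether
	 * the folio is still partially mapped and must be queued for
	 * deferred splitting: at most one queue operation per range.
	 */
	if (folio_test_large(folio) && folio_mapped(folio))
		deferred_split_folio(folio);
}
```

The point of the batching is visible in the structure: the per-page work shrinks to a mapcount decrement, while the statistics update and the deferred-split decision (the part that takes the contended split_queue_lock) happen at most once per range instead of once per page.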
On Wed, Aug 2, 2023 at 10:42 AM Ryan Roberts <ryan.roberts@arm.com> wrote:
>
> Hi Andrew,
>
> After discussion about this in Matthew's THP Cabal, we have decided to take a
> different approach with this patch set. Could you therefore remove it from
> mm-unstable, please? Sorry about the noise.
>
> I'm going to try 2 different approaches:
> - avoid the split lock contention by using mmu gather (suggested by Kirill)
> - expand the zap pte batching to also cover file folios (as requested by Yu).

Also, we didn't have the chance to clarify before Ryan dropped out of the
meeting: I don't think this series is a prerequisite for the other series
("variable-order, large folios for anonymous memory") at all. They can move
along in parallel: one is specific to anon, and the other (this series) is
generic for all types of large folios.
On Wed, Aug 02, 2023 at 05:42:23PM +0100, Ryan Roberts wrote:
> - avoid the split lock contention by using mmu gather (suggested by Kirill)
[Offlist]
So, my idea is to embed struct deferred_split into struct mmu_gather and
make the zap path use it instead of the per-node/per-memcg deferred_split. This
would avoid lock contention. If the list is not empty after the zap, move the
entries to the per-node/per-memcg deferred_split.
But it is only relevant if we see lock contention.
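A rough sketch of what embedding the queue in the mmu_gather could look like is below. The structure and helper names (mmu_gather_sketch, tlb_queue_deferred_split, tlb_flush_deferred_split) are hypothetical and only meant to make the shape of the proposal concrete.

```c
#include <linux/mm_types.h>
#include <linux/mmzone.h>
#include <linux/spinlock.h>
#include <linux/list.h>

/*
 * Hypothetical sketch of the suggestion above - none of these names exist
 * in the kernel. Each mmu_gather would carry a private deferred_split
 * queue, initialised when the gather starts, so the zap path never takes
 * the contended per-node/per-memcg split_queue_lock.
 */
struct mmu_gather_sketch {
	/* ... existing mmu_gather state ... */
	struct deferred_split local_ds;	/* private to this zap operation */
};

static void tlb_queue_deferred_split(struct mmu_gather_sketch *tlb,
				     struct folio *folio)
{
	/* No locking: local_ds is only ever touched by this zap. */
	list_add_tail(&folio->_deferred_list, &tlb->local_ds.split_queue);
	tlb->local_ds.split_queue_len++;
}

/* Called once when the gather finishes (e.g. from tlb_finish_mmu()). */
static void tlb_flush_deferred_split(struct mmu_gather_sketch *tlb,
				     struct deferred_split *ds_queue)
{
	unsigned long flags;

	if (list_empty(&tlb->local_ds.split_queue))
		return;

	/* One lock acquisition per zap, instead of one per folio. */
	spin_lock_irqsave(&ds_queue->split_queue_lock, flags);
	list_splice_tail_init(&tlb->local_ds.split_queue,
			      &ds_queue->split_queue);
	ds_queue->split_queue_len += tlb->local_ds.split_queue_len;
	spin_unlock_irqrestore(&ds_queue->split_queue_lock, flags);
	tlb->local_ds.split_queue_len = 0;
}
```

With this shape, a zap that touches many large folios takes the per-node/per-memcg split_queue_lock at most once at the end (or not at all if nothing was queued), which is what removes the contention being discussed.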
On 03/08/2023 13:01, Kirill A. Shutemov wrote:
> On Wed, Aug 02, 2023 at 05:42:23PM +0100, Ryan Roberts wrote:
>> - avoid the split lock contention by using mmu gather (suggested by Kirill)
>
> [Offlist]
>
> So, my idea is to embed struct deferred_split into struct mmu_gather and
> make the zap path use it instead of the per-node/per-memcg deferred_split. This
> would avoid lock contention. If the list is not empty after the zap, move the
> entries to the per-node/per-memcg deferred_split.
>
> But it is only relevant if we see lock contention.

Thanks Kirill, I understand the proposal now.

Having thought about this overnight, I'm thinking I'll just implement the full
batch approach that Yu proposed. In this case, we will get the benefits of
batching the rmap removal (for all folio types), and as a side benefit we will
get the lock contention reduction (if there is any lock contention) without the
need for a new per-mmu_gather struct deferred_split. Shout if you have an issue
with this.
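What the "full batch approach" might look like inside zap_pte_range() can be sketched as follows. The helper name count_folio_run is made up, and the range-removal call mentioned afterwards is assumed to have the shape introduced by this series; the eventual implementation may well differ.

```c
#include <linux/mm.h>
#include <linux/pgtable.h>

/*
 * Hypothetical helper (the name is invented for illustration) showing how
 * zap_pte_range() could detect a run of present PTEs that map consecutive
 * pages of the same folio, so the rmap for the whole run can be removed in
 * one call.
 */
static int count_folio_run(struct vm_area_struct *vma, pte_t *pte,
			   unsigned long addr, unsigned long end,
			   struct page *first_page)
{
	struct folio *folio = page_folio(first_page);
	struct page *expected = first_page;
	int nr = 0;

	for (; addr != end; pte++, addr += PAGE_SIZE, expected++, nr++) {
		pte_t ptent = ptep_get(pte);
		struct page *page;

		if (!pte_present(ptent))
			break;
		page = vm_normal_page(vma, addr, ptent);
		/* Stop when the run leaves the folio or becomes discontiguous. */
		if (!page || page != expected || page_folio(page) != folio)
			break;
	}
	return nr;
}
```

The caller would then clear the nr PTEs for the run and finish with a single range call such as folio_remove_rmap_range(folio, first_page, nr, vma) instead of nr separate page_remove_rmap() calls, which is where both the rmap batching benefit and the reduced deferred-split queue traffic come from.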