From patchwork Wed Apr 3 11:40:32 2024
From: Ryan Roberts <ryan.roberts@arm.com>
To: Andrew Morton, David Hildenbrand, Matthew Wilcox, Huang Ying,
 Gao Xiang, Yu Zhao, Yang Shi, Michal Hocko, Kefeng Wang,
 Barry Song <21cnbao@gmail.com>, Chris Li, Lance Yang
Cc: Ryan Roberts, linux-mm@kvack.org, linux-kernel@vger.kernel.org,
 Barry Song
Subject: [PATCH v6 6/6] mm: madvise: Avoid split during MADV_PAGEOUT and
 MADV_COLD
Date: Wed, 3 Apr 2024 12:40:32 +0100
Message-Id: <20240403114032.1162100-7-ryan.roberts@arm.com>
In-Reply-To: <20240403114032.1162100-1-ryan.roberts@arm.com>
References: <20240403114032.1162100-1-ryan.roberts@arm.com>
Rework madvise_cold_or_pageout_pte_range() to avoid splitting any large
folio that is fully and contiguously mapped in the pageout/cold vm
range. This change means that large folios will be maintained all the
way to swap storage. This both improves performance during swap-out, by
eliding the cost of splitting the folio, and sets us up nicely for
maintaining the large folio when it is swapped back in (to be covered
in a separate series).

Folios that are not fully mapped in the target range are still split,
but note that the behavior has changed: if the split fails for any
reason (folio locked, shared, etc.), we now leave the folio as is and
move to the next pte in the range, continuing work on the remaining
folios. Previously, any failure of this sort would cause the entire
operation to give up, and no folios mapped at higher addresses were
paged out or made cold. Given that large folios are becoming more
common, the old behavior would likely have led to wasted opportunities.

While we are at it, change the code that clears young from the ptes to
use ptep_test_and_clear_young(), via the new mkold_ptes() batch helper
function. This is more efficient than get_and_clear/modify/set,
especially for contpte mappings on arm64, where the old approach would
require unfolding/refolding and the new approach can be done in place.

Reviewed-by: Barry Song
Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
---
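For illustration only (not part of the patch): a minimal userspace
sequence that exercises the new path. This assumes THP/mTHP is enabled
on the system; with a large folio fully mapped in the range,
madvise_cold_or_pageout_pte_range() can now reclaim it whole rather
than splitting it to base pages first.

    #include <stdlib.h>
    #include <sys/mman.h>

    int main(void)
    {
            size_t len = 2UL * 1024 * 1024;         /* one PMD-sized region */
            char *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
                             MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

            if (buf == MAP_FAILED)
                    return 1;
            madvise(buf, len, MADV_HUGEPAGE);       /* request a large folio */
            for (size_t i = 0; i < len; i += 4096)
                    buf[i] = 1;                     /* fault the range in */
            madvise(buf, len, MADV_PAGEOUT);        /* page out; no split needed */
            return 0;
    }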
 include/linux/pgtable.h | 30 ++++++++++++++
 mm/internal.h           | 12 +++++-
 mm/madvise.c            | 88 ++++++++++++++++++++++++-----------------
 mm/memory.c             |  4 +-
 4 files changed, 93 insertions(+), 41 deletions(-)

diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
index 0278259f7078..0f4b2faa1d71 100644
--- a/include/linux/pgtable.h
+++ b/include/linux/pgtable.h
@@ -361,6 +361,36 @@ static inline int ptep_test_and_clear_young(struct vm_area_struct *vma,
 }
 #endif
 
+#ifndef mkold_ptes
+/**
+ * mkold_ptes - Mark PTEs that map consecutive pages of the same folio as old.
+ * @vma: VMA the pages are mapped into.
+ * @addr: Address the first page is mapped at.
+ * @ptep: Page table pointer for the first entry.
+ * @nr: Number of entries to mark old.
+ *
+ * May be overridden by the architecture; otherwise, implemented as a simple
+ * loop over ptep_test_and_clear_young().
+ *
+ * Note that PTE bits in the PTE range besides the PFN can differ. For example,
+ * some PTEs might be write-protected.
+ *
+ * Context: The caller holds the page table lock. The PTEs map consecutive
+ * pages that belong to the same folio. The PTEs are all in the same PMD.
+ */
+static inline void mkold_ptes(struct vm_area_struct *vma, unsigned long addr,
+		pte_t *ptep, unsigned int nr)
+{
+	for (;;) {
+		ptep_test_and_clear_young(vma, addr, ptep);
+		if (--nr == 0)
+			break;
+		ptep++;
+		addr += PAGE_SIZE;
+	}
+}
+#endif
+
 #ifndef __HAVE_ARCH_PMDP_TEST_AND_CLEAR_YOUNG
 #if defined(CONFIG_TRANSPARENT_HUGEPAGE) || defined(CONFIG_ARCH_HAS_NONLEAF_PMD_YOUNG)
 static inline int pmdp_test_and_clear_young(struct vm_area_struct *vma,
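An architecture can replace the generic loop above by defining
mkold_ptes in its own asm/pgtable.h, following the usual #ifndef
override convention. A sketch of the shape such an override could
take; arch_clear_young_contig() is hypothetical, standing in for
whatever batched access-bit clear the architecture provides:

    #define mkold_ptes mkold_ptes
    static inline void mkold_ptes(struct vm_area_struct *vma,
                    unsigned long addr, pte_t *ptep, unsigned int nr)
    {
            /*
             * Hypothetical arch helper: clear the access bit across all
             * @nr entries in one pass, e.g. on arm64 without unfolding
             * and refolding a contpte block.
             */
            arch_clear_young_contig(vma->vm_mm, addr, ptep, nr);
    }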
diff --git a/mm/internal.h b/mm/internal.h
index 88705ab4c50a..003bc189736b 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -130,6 +130,8 @@ static inline pte_t __pte_batch_clear_ignored(pte_t pte, fpb_t flags)
  * @flags: Flags to modify the PTE batch semantics.
  * @any_writable: Optional pointer to indicate whether any entry except the
  *		  first one is writable.
+ * @any_young: Optional pointer to indicate whether any entry except the
+ *	       first one is young.
  *
  * Detect a PTE batch: consecutive (present) PTEs that map consecutive
  * pages of the same large folio.
@@ -145,16 +147,18 @@ static inline pte_t __pte_batch_clear_ignored(pte_t pte, fpb_t flags)
  */
 static inline int folio_pte_batch(struct folio *folio, unsigned long addr,
 		pte_t *start_ptep, pte_t pte, int max_nr, fpb_t flags,
-		bool *any_writable)
+		bool *any_writable, bool *any_young)
 {
 	unsigned long folio_end_pfn = folio_pfn(folio) + folio_nr_pages(folio);
 	const pte_t *end_ptep = start_ptep + max_nr;
 	pte_t expected_pte, *ptep;
-	bool writable;
+	bool writable, young;
 	int nr;
 
 	if (any_writable)
 		*any_writable = false;
+	if (any_young)
+		*any_young = false;
 
 	VM_WARN_ON_FOLIO(!pte_present(pte), folio);
 	VM_WARN_ON_FOLIO(!folio_test_large(folio) || max_nr < 1, folio);
@@ -168,6 +172,8 @@ static inline int folio_pte_batch(struct folio *folio, unsigned long addr,
 		pte = ptep_get(ptep);
 		if (any_writable)
 			writable = !!pte_write(pte);
+		if (any_young)
+			young = !!pte_young(pte);
 		pte = __pte_batch_clear_ignored(pte, flags);
 
 		if (!pte_same(pte, expected_pte))
@@ -183,6 +189,8 @@ static inline int folio_pte_batch(struct folio *folio, unsigned long addr,
 
 		if (any_writable)
 			*any_writable |= writable;
+		if (any_young)
+			*any_young |= young;
 
 		nr = pte_batch_hint(ptep, pte);
 		expected_pte = pte_advance_pfn(expected_pte, nr);
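The new out-parameter follows the same convention as any_writable: it
reports on every entry in the batch except the first. A caller that
wants a single young bit covering the whole batch folds it back into
the first pte, exactly as madvise_cold_or_pageout_pte_range() does
below; in sketch form:

    bool any_young;
    int nr;

    nr = folio_pte_batch(folio, addr, pte, ptent, max_nr,
                         FPB_IGNORE_DIRTY | FPB_IGNORE_SOFT_DIRTY,
                         NULL, &any_young);
    if (any_young)
            ptent = pte_mkyoung(ptent);
    /* pte_young(ptent) now reflects all nr entries in the batch */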
diff --git a/mm/madvise.c b/mm/madvise.c
index 070bedb4996e..c1377f3b5ca1 100644
--- a/mm/madvise.c
+++ b/mm/madvise.c
@@ -336,6 +336,7 @@ static int madvise_cold_or_pageout_pte_range(pmd_t *pmd,
 	LIST_HEAD(folio_list);
 	bool pageout_anon_only_filter;
 	unsigned int batch_count = 0;
+	int nr;
 
 	if (fatal_signal_pending(current))
 		return -EINTR;
@@ -423,7 +424,8 @@ static int madvise_cold_or_pageout_pte_range(pmd_t *pmd,
 		return 0;
 	flush_tlb_batched_pending(mm);
 	arch_enter_lazy_mmu_mode();
-	for (; addr < end; pte++, addr += PAGE_SIZE) {
+	for (; addr < end; pte += nr, addr += nr * PAGE_SIZE) {
+		nr = 1;
 		ptent = ptep_get(pte);
 
 		if (++batch_count == SWAP_CLUSTER_MAX) {
@@ -447,55 +449,67 @@ static int madvise_cold_or_pageout_pte_range(pmd_t *pmd,
 			continue;
 
 		/*
-		 * Creating a THP page is expensive so split it only if we
-		 * are sure it's worth. Split it if we are only owner.
+		 * If we encounter a large folio, only split it if it is not
+		 * fully mapped within the range we are operating on. Otherwise
+		 * leave it as is so that it can be swapped out whole. If we
+		 * fail to split a folio, leave it in place and advance to the
+		 * next pte in the range.
 		 */
 		if (folio_test_large(folio)) {
-			int err;
-
-			if (folio_likely_mapped_shared(folio))
-				break;
-			if (pageout_anon_only_filter && !folio_test_anon(folio))
-				break;
-			if (!folio_trylock(folio))
-				break;
-			folio_get(folio);
-			arch_leave_lazy_mmu_mode();
-			pte_unmap_unlock(start_pte, ptl);
-			start_pte = NULL;
-			err = split_folio(folio);
-			folio_unlock(folio);
-			folio_put(folio);
-			if (err)
-				break;
-			start_pte = pte =
-				pte_offset_map_lock(mm, pmd, addr, &ptl);
-			if (!start_pte)
-				break;
-			arch_enter_lazy_mmu_mode();
-			pte--;
-			addr -= PAGE_SIZE;
-			continue;
+			const fpb_t fpb_flags = FPB_IGNORE_DIRTY |
+						FPB_IGNORE_SOFT_DIRTY;
+			int max_nr = (end - addr) / PAGE_SIZE;
+			bool any_young;
+
+			nr = folio_pte_batch(folio, addr, pte, ptent, max_nr,
+					     fpb_flags, NULL, &any_young);
+			if (any_young)
+				ptent = pte_mkyoung(ptent);
+
+			if (nr < folio_nr_pages(folio)) {
+				int err;
+
+				if (folio_likely_mapped_shared(folio))
+					continue;
+				if (pageout_anon_only_filter && !folio_test_anon(folio))
+					continue;
+				if (!folio_trylock(folio))
+					continue;
+				folio_get(folio);
+				arch_leave_lazy_mmu_mode();
+				pte_unmap_unlock(start_pte, ptl);
+				start_pte = NULL;
+				err = split_folio(folio);
+				folio_unlock(folio);
+				folio_put(folio);
+				start_pte = pte =
+					pte_offset_map_lock(mm, pmd, addr, &ptl);
+				if (!start_pte)
+					break;
+				if (err)
+					continue;
+				arch_enter_lazy_mmu_mode();
+				nr = 0;
+				continue;
+			}
 		}
 
 		/*
 		 * Do not interfere with other mappings of this folio and
-		 * non-LRU folio.
+		 * non-LRU folio. If we have a large folio at this point, we
+		 * know it is fully mapped so if its mapcount is the same as its
+		 * number of pages, it must be exclusive.
 		 */
-		if (!folio_test_lru(folio) || folio_mapcount(folio) != 1)
+		if (!folio_test_lru(folio) ||
+		    folio_mapcount(folio) != folio_nr_pages(folio))
 			continue;
 
 		if (pageout_anon_only_filter && !folio_test_anon(folio))
 			continue;
 
-		VM_BUG_ON_FOLIO(folio_test_large(folio), folio);
-
 		if (!pageout && pte_young(ptent)) {
-			ptent = ptep_get_and_clear_full(mm, addr, pte,
-							tlb->fullmm);
-			ptent = pte_mkold(ptent);
-			set_pte_at(mm, addr, pte, ptent);
-			tlb_remove_tlb_entry(tlb, pte, addr);
+			mkold_ptes(vma, addr, pte, nr);
+			tlb_remove_tlb_entries(tlb, pte, nr, addr);
 		}
 
 		/*
diff --git a/mm/memory.c b/mm/memory.c
index ef2968894718..912cd738ec03 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -989,7 +989,7 @@ copy_present_ptes(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma
 		flags |= FPB_IGNORE_SOFT_DIRTY;
 
 	nr = folio_pte_batch(folio, addr, src_pte, pte, max_nr, flags,
-			     &any_writable);
+			     &any_writable, NULL);
 	folio_ref_add(folio, nr);
 	if (folio_test_anon(folio)) {
 		if (unlikely(folio_try_dup_anon_rmap_ptes(folio, page,
@@ -1559,7 +1559,7 @@ static inline int zap_present_ptes(struct mmu_gather *tlb,
 	 */
 	if (unlikely(folio_test_large(folio) && max_nr != 1)) {
 		nr = folio_pte_batch(folio, addr, pte, ptent, max_nr, fpb_flags,
-				     NULL);
+				     NULL, NULL);
 		zap_present_folio_ptes(tlb, vma, folio, page, pte, ptent,
				       nr, addr, details, rss, force_flush,
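For reference, tlb_remove_tlb_entries() used above batches the per-pte
TLB bookkeeping that tlb_remove_tlb_entry() performs for a single
entry. Conceptually (a sketch of the equivalence, not the actual
mmu_gather implementation, which can record the range more
efficiently):

    /* tlb_remove_tlb_entries(tlb, pte, nr, addr) behaves like: */
    for (int i = 0; i < nr; i++, pte++, addr += PAGE_SIZE)
            tlb_remove_tlb_entry(tlb, pte, addr);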