From patchwork Thu Oct 31 08:13:21 2024
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Qi Zheng <zhengqi.arch@bytedance.com>
X-Patchwork-Id: 13857708
From: Qi Zheng <zhengqi.arch@bytedance.com>
To: david@redhat.com, jannh@google.com, hughd@google.com,
    willy@infradead.org, mgorman@suse.de, muchun.song@linux.dev,
    vbabka@kernel.org, akpm@linux-foundation.org, zokeefe@google.com,
    rientjes@google.com, peterx@redhat.com, catalin.marinas@arm.com
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, x86@kernel.org,
    Qi Zheng <zhengqi.arch@bytedance.com>
Subject: [PATCH v2 5/7] mm: pgtable: try to reclaim empty PTE page in
 madvise(MADV_DONTNEED)
Date: Thu, 31 Oct 2024 16:13:21 +0800
Message-Id: 
X-Mailer: git-send-email 2.24.3 (Apple Git-128)
In-Reply-To: 
References: 
In order to pursue high performance, applications mostly use some
high-performance user-mode memory allocators, such as jemalloc or
tcmalloc. These memory allocators use madvise(MADV_DONTNEED or MADV_FREE)
to release physical memory, but neither MADV_DONTNEED nor MADV_FREE will
release page table memory, which may cause huge page table memory usage.
The following is a memory usage snapshot of one process, which was
actually observed on our server:

        VIRT:  55t
        RES:   590g
        VmPTE: 110g

In this case, most of the page table entries are empty. For a PTE page
where all entries are empty, we can actually free it back to the system
for others to use.

As a first step, this commit aims to synchronously free empty PTE pages
in the madvise(MADV_DONTNEED) case. We detect and free empty PTE pages
in zap_pte_range(), and add zap_details.reclaim_pt to exclude cases
other than madvise(MADV_DONTNEED).

Once an empty PTE page is detected, we first try to take the pmd lock
while still holding the pte lock. If successful, we clear the pmd entry
directly (fast path). Otherwise, we wait until the pte lock is released,
then re-take the pmd and pte locks and loop PTRS_PER_PTE times, checking
pte_none(), to re-verify that the PTE page is empty before freeing it
(slow path).

For other cases, such as madvise(MADV_FREE), we may consider scanning
and freeing empty PTE pages asynchronously in the future.

The following code snippet shows the effect of the optimization:

        mmap 50G
        while (1) {
                for (; i < 1024 * 25; i++) {
                        touch 2M memory
                        madvise MADV_DONTNEED 2M
                }
        }

As we can see, the memory usage of VmPTE is reduced:

                        before           after
        VIRT           50.0 GB         50.0 GB
        RES             3.1 MB          3.1 MB
        VmPTE        102640 KB          240 KB

Signed-off-by: Qi Zheng <zhengqi.arch@bytedance.com>
---
 include/linux/mm.h |  1 +
 mm/Kconfig         | 15 ++++++++++
 mm/Makefile        |  1 +
 mm/internal.h      | 23 ++++++++++++++++
 mm/madvise.c       |  4 ++-
 mm/memory.c        | 45 +++++++++++++++++++++++++++++-
 mm/pt_reclaim.c    | 68 ++++++++++++++++++++++++++++++++++++++++++++++
 7 files changed, 155 insertions(+), 2 deletions(-)
 create mode 100644 mm/pt_reclaim.c

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 3e4bb43035953..ce3936590fe72 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2319,6 +2319,7 @@ extern void pagefault_out_of_memory(void);
 struct zap_details {
        struct folio *single_folio;     /* Locked folio to be unmapped */
        bool even_cows;                 /* Zap COWed private pages too? */
+       bool reclaim_pt;                /* Need reclaim page tables? */
        zap_flags_t zap_flags;          /* Extra flags for zapping */
 };
 
diff --git a/mm/Kconfig b/mm/Kconfig
index 84000b0168086..681909e0a9fa3 100644
--- a/mm/Kconfig
+++ b/mm/Kconfig
@@ -1301,6 +1301,21 @@ config ARCH_HAS_USER_SHADOW_STACK
          The architecture has hardware support for userspace shadow call
          stacks (eg, x86 CET, arm64 GCS or RISC-V Zicfiss).
 
+config ARCH_SUPPORTS_PT_RECLAIM
+       def_bool n
+
+config PT_RECLAIM
+       bool "reclaim empty user page table pages"
+       default y
+       depends on ARCH_SUPPORTS_PT_RECLAIM && MMU && SMP
+       select MMU_GATHER_RCU_TABLE_FREE
+       help
+         Try to reclaim empty user page table pages in paths other than
+         the munmap and exit_mmap paths.
+
+         Note: now only empty user PTE page table pages will be reclaimed.
+
+
 source "mm/damon/Kconfig"
 
 endmenu

diff --git a/mm/Makefile b/mm/Makefile
index d5639b0361663..9d816323d247a 100644
--- a/mm/Makefile
+++ b/mm/Makefile
@@ -145,3 +145,4 @@ obj-$(CONFIG_GENERIC_IOREMAP) += ioremap.o
 obj-$(CONFIG_SHRINKER_DEBUG) += shrinker_debug.o
 obj-$(CONFIG_EXECMEM) += execmem.o
 obj-$(CONFIG_TMPFS_QUOTA) += shmem_quota.o
+obj-$(CONFIG_PT_RECLAIM) += pt_reclaim.o

diff --git a/mm/internal.h b/mm/internal.h
index d5b93c5b63648..7aba395a9940f 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -1508,4 +1508,27 @@ int walk_page_range_mm(struct mm_struct *mm, unsigned long start,
                        unsigned long end, const struct mm_walk_ops *ops,
                        void *private);
 
+#ifdef CONFIG_PT_RECLAIM
+bool try_get_and_clear_pmd(struct mm_struct *mm, pmd_t *pmd, pmd_t *pmdval);
+void free_pte(struct mm_struct *mm, unsigned long addr, struct mmu_gather *tlb,
+             pmd_t pmdval);
+void try_to_free_pte(struct mm_struct *mm, pmd_t *pmd, unsigned long addr,
+                    struct mmu_gather *tlb);
+#else
+static inline bool try_get_and_clear_pmd(struct mm_struct *mm, pmd_t *pmd,
+                                        pmd_t *pmdval)
+{
+       return false;
+}
+static inline void free_pte(struct mm_struct *mm, unsigned long addr,
+                           struct mmu_gather *tlb, pmd_t pmdval)
+{
+}
+static inline void try_to_free_pte(struct mm_struct *mm, pmd_t *pmd,
+                                  unsigned long addr, struct mmu_gather *tlb)
+{
+}
+#endif /* CONFIG_PT_RECLAIM */
+
+
 #endif /* __MM_INTERNAL_H */

diff --git a/mm/madvise.c b/mm/madvise.c
index 0ceae57da7dad..ee88652761d45 100644
--- a/mm/madvise.c
+++ b/mm/madvise.c
@@ -851,7 +851,9 @@ static int madvise_free_single_vma(struct vm_area_struct *vma,
 static long madvise_dontneed_single_vma(struct vm_area_struct *vma,
                                        unsigned long start, unsigned long end)
 {
-       zap_page_range_single(vma, start, end - start, NULL);
+       struct zap_details details = {.reclaim_pt = true,};
+
+       zap_page_range_single(vma, start, end - start, &details);
        return 0;
 }
 
diff --git a/mm/memory.c b/mm/memory.c
index 002aa4f454fa0..c4a8c18fbcfd7 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -1436,7 +1436,7 @@ copy_page_range(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma)
 static inline bool should_zap_cows(struct zap_details *details)
 {
        /* By default, zap all pages */
-       if (!details)
+       if (!details || details->reclaim_pt)
                return true;
 
        /* Or, we zap COWed pages only if the caller wants to */
@@ -1678,6 +1678,30 @@ static inline int do_zap_pte_range(struct mmu_gather *tlb,
                                   details, rss);
 }
 
+static inline int count_pte_none(pte_t *pte, int nr)
+{
+       int none_nr = 0;
+
+       /*
+        * If PTE_MARKER_UFFD_WP is enabled, the uffd-wp PTEs may be
+        * re-installed, so we need to check pte_none() one by one.
+        * Otherwise, checking a single PTE in a batch is sufficient.
+        */
+#ifdef CONFIG_PTE_MARKER_UFFD_WP
+       for (;;) {
+               if (pte_none(ptep_get(pte)))
+                       none_nr++;
+               if (--nr == 0)
+                       break;
+               pte++;
+       }
+#else
+       if (pte_none(ptep_get(pte)))
+               none_nr = nr;
+#endif
+       return none_nr;
+}
+
 static unsigned long zap_pte_range(struct mmu_gather *tlb,
                                struct vm_area_struct *vma, pmd_t *pmd,
                                unsigned long addr, unsigned long end,
@@ -1689,8 +1713,16 @@ static unsigned long zap_pte_range(struct mmu_gather *tlb,
        spinlock_t *ptl;
        pte_t *start_pte;
        pte_t *pte;
+       pmd_t pmdval;
+       bool can_reclaim_pt = false;
+       bool direct_reclaim = false;
+       unsigned long start = addr;
+       int none_nr = 0;
        int nr;
 
+       if (details && details->reclaim_pt && (end - start >= PMD_SIZE))
+               can_reclaim_pt = true;
+
 retry:
        tlb_change_page_size(tlb, PAGE_SIZE);
        init_rss_vec(rss);
@@ -1706,12 +1738,16 @@ static unsigned long zap_pte_range(struct mmu_gather *tlb,
 
                nr = do_zap_pte_range(tlb, vma, pte, addr, end, details,
                                      rss, &force_flush, &force_break);
+               none_nr += count_pte_none(pte, nr);
                if (unlikely(force_break)) {
                        addr += nr * PAGE_SIZE;
                        break;
                }
        } while (pte += nr, addr += PAGE_SIZE * nr, addr != end);
 
+       if (addr == end && can_reclaim_pt && (none_nr == PTRS_PER_PTE))
+               direct_reclaim = try_get_and_clear_pmd(mm, pmd, &pmdval);
+
        add_mm_rss_vec(mm, rss);
        arch_leave_lazy_mmu_mode();
 
@@ -1738,6 +1774,13 @@ static unsigned long zap_pte_range(struct mmu_gather *tlb,
                goto retry;
        }
 
+       if (can_reclaim_pt) {
+               if (direct_reclaim)
+                       free_pte(mm, start, tlb, pmdval);
+               else
+                       try_to_free_pte(mm, pmd, start, tlb);
+       }
+
        return addr;
 }
 
diff --git a/mm/pt_reclaim.c b/mm/pt_reclaim.c
new file mode 100644
index 0000000000000..fc055da40b615
--- /dev/null
+++ b/mm/pt_reclaim.c
@@ -0,0 +1,68 @@
+// SPDX-License-Identifier: GPL-2.0
+#include <linux/hugetlb.h>
+#include <asm-generic/tlb.h>
+#include <asm/pgalloc.h>
+
+#include "internal.h"
+
+bool try_get_and_clear_pmd(struct mm_struct *mm, pmd_t *pmd, pmd_t *pmdval)
+{
+       spinlock_t *pml = pmd_lockptr(mm, pmd);
+
+       if (!spin_trylock(pml))
+               return false;
+
+       *pmdval = pmdp_get_lockless(pmd);
+       pmd_clear(pmd);
+       spin_unlock(pml);
+
+       return true;
+}
+
+void free_pte(struct mm_struct *mm, unsigned long addr, struct mmu_gather *tlb,
+             pmd_t pmdval)
+{
+       pte_free_tlb(tlb, pmd_pgtable(pmdval), addr);
+       mm_dec_nr_ptes(mm);
+}
+
+void try_to_free_pte(struct mm_struct *mm, pmd_t *pmd, unsigned long addr,
+                    struct mmu_gather *tlb)
+{
+       pmd_t pmdval;
+       spinlock_t *pml, *ptl;
+       pte_t *start_pte, *pte;
+       int i;
+
+       start_pte = pte_offset_map_rw_nolock(mm, pmd, addr, &pmdval, &ptl);
+       if (!start_pte)
+               return;
+
+       pml = pmd_lock(mm, pmd);
+       if (ptl != pml)
+               spin_lock_nested(ptl, SINGLE_DEPTH_NESTING);
+
+       if (unlikely(!pmd_same(pmdval, pmdp_get_lockless(pmd))))
+               goto out_ptl;
+
+       /* Check if it is empty PTE page */
+       for (i = 0, pte = start_pte; i < PTRS_PER_PTE; i++, pte++) {
+               if (!pte_none(ptep_get(pte)))
+                       goto out_ptl;
+       }
+       pte_unmap(start_pte);
+
+       pmd_clear(pmd);
+
+       if (ptl != pml)
+               spin_unlock(ptl);
+       spin_unlock(pml);
+
+       free_pte(mm, addr, tlb, pmdval);
+
+       return;
+out_ptl:
+       pte_unmap_unlock(start_pte, ptl);
+       if (pml != ptl)
+               spin_unlock(pml);
+}
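
For reference, the pseudocode snippet in the commit message above could be
written out as a small userspace program along the following lines. This is
a minimal sketch, not part of the patch: the 50G mapping and the 2M
touch/zap stride are taken from the description, while the program name,
constants, and the resetting of i on each pass are illustrative choices.

        #define _GNU_SOURCE     /* for MAP_ANONYMOUS and madvise() */
        #include <stdio.h>
        #include <string.h>
        #include <sys/mman.h>

        #define MAP_SIZE (50UL << 30)   /* mmap 50G, as in the snippet */
        #define CHUNK    (2UL << 20)    /* touch and zap 2M at a time */

        int main(void)
        {
                unsigned long i;
                char *buf = mmap(NULL, MAP_SIZE, PROT_READ | PROT_WRITE,
                                 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

                if (buf == MAP_FAILED) {
                        perror("mmap");
                        return 1;
                }

                for (;;) {
                        /* 1024 * 25 chunks of 2M cover the 50G mapping */
                        for (i = 0; i < 1024 * 25; i++) {
                                /* touch 2M memory */
                                memset(buf + i * CHUNK, 1, CHUNK);
                                /* madvise MADV_DONTNEED 2M */
                                madvise(buf + i * CHUNK, CHUNK,
                                        MADV_DONTNEED);
                        }
                        /* the VmPTE line of /proc/self/status can be
                         * sampled here to observe page table usage */
                }
                return 0;
        }

Running such a program on kernels with and without CONFIG_PT_RECLAIM
enabled should roughly reproduce the before/after VmPTE numbers shown in
the table above.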