From patchwork Wed Dec 4 11:09:49 2024
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Qi Zheng <zhengqi.arch@bytedance.com>
X-Patchwork-Id: 13893590
From: Qi Zheng <zhengqi.arch@bytedance.com>
To: david@redhat.com, jannh@google.com, hughd@google.com,
    willy@infradead.org, muchun.song@linux.dev, vbabka@kernel.org,
    peterx@redhat.com, akpm@linux-foundation.org
Cc: mgorman@suse.de, catalin.marinas@arm.com, will@kernel.org,
    dave.hansen@linux.intel.com, luto@kernel.org, peterz@infradead.org,
    x86@kernel.org, lorenzo.stoakes@oracle.com, zokeefe@google.com,
    rientjes@google.com, linux-mm@kvack.org, linux-kernel@vger.kernel.org,
    Qi Zheng <zhengqi.arch@bytedance.com>
Subject: [PATCH v4 09/11] mm: pgtable: reclaim empty PTE page in
 madvise(MADV_DONTNEED)
Date: Wed, 4 Dec 2024 19:09:49 +0800
Message-Id: <92aba2b319a734913f18ba41e7d86a265f0b84e2.1733305182.git.zhengqi.arch@bytedance.com>
X-Mailer: git-send-email 2.24.3 (Apple Git-128)

In order to pursue high performance, applications mostly use some
high-performance user-mode memory allocators, such as jemalloc or
tcmalloc. These memory allocators use madvise(MADV_DONTNEED or MADV_FREE)
to release physical memory, but neither MADV_DONTNEED nor MADV_FREE will
release page table memory, which may lead to huge page table memory
usage.

The following is a memory usage snapshot of one such process, which
actually happened on our server:

        VIRT:  55t
        RES:   590g
        VmPTE: 110g

In this case, most of the page table entries are empty. For a PTE page
in which all entries are empty, we can actually free it back to the
system for others to use.

As a first step, this commit aims to synchronously free empty PTE pages
in the madvise(MADV_DONTNEED) case. We detect and free empty PTE pages
in zap_pte_range(), and add zap_details.reclaim_pt to exclude cases
other than madvise(MADV_DONTNEED).

Once an empty PTE page is detected, we first try to take the pmd lock
while still holding the pte lock. If that succeeds, we clear the pmd
entry directly (fast path). Otherwise, we wait until the pte lock is
released, then re-take the pmd and pte locks and loop PTRS_PER_PTE
times, checking pte_none() to re-verify that the PTE page is really
empty before freeing it (slow path).

For other cases, such as madvise(MADV_FREE), scanning and freeing empty
PTE pages asynchronously is left for future work.
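
Condensed, the flow added to zap_pte_range() looks roughly like the
following (a reading aid only, simplified from the actual hunks below;
TLB flushing and bookkeeping are elided):

	/*
	 * can_reclaim_pt starts out true only if details->reclaim_pt is
	 * set and the requested range spans a whole PMD_SIZE region; it
	 * is cleared if any pte was skipped while zapping.
	 */
	if (can_reclaim_pt && addr == end)
		/*
		 * Fast path: the pte lock is still held here, so the pmd
		 * lock is only trylocked (blocking would invert the usual
		 * pmd -> pte lock order); on success the pmd entry is
		 * cleared right away.
		 */
		direct_reclaim = try_get_and_clear_pmd(mm, pmd, &pmdval);

	/* ... the pte lock is released ... */

	if (can_reclaim_pt) {
		if (direct_reclaim) {
			/* Fast path: pmd entry already cleared above. */
			free_pte(mm, start, tlb, pmdval);
		} else {
			/*
			 * Slow path: re-take the pmd and pte locks, loop
			 * over all PTRS_PER_PTE entries re-checking
			 * pte_none(), and only then clear the pmd entry
			 * and free the PTE page.
			 */
			try_to_free_pte(mm, pmd, start, tlb);
		}
	}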
The following code snippet shows the effect of the optimization:

	mmap 50G
	while (1) {
		for (; i < 1024 * 25; i++) {
			touch 2M memory
			madvise MADV_DONTNEED 2M
		}
	}

As we can see, the memory usage of VmPTE is significantly reduced:

                       before           after
        VIRT          50.0 GB         50.0 GB
        RES            3.1 MB          3.1 MB
        VmPTE       102640 KB          240 KB

Signed-off-by: Qi Zheng <zhengqi.arch@bytedance.com>
---
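For reference, the pseudo-code above can be turned into a runnable
userspace approximation along these lines (illustrative only, not part
of this patch; the one-byte-per-4K "touch" and the PMD_SIZE alignment
fix-up are our own choices):

	#include <stdint.h>
	#include <stdio.h>
	#include <sys/mman.h>

	#define SZ_2M	(2UL << 20)
	#define NR_2M	(1024 * 25)	/* 25600 x 2M = 50G */

	int main(void)
	{
		size_t len = (size_t)NR_2M * SZ_2M;
		char *raw, *buf;

		/*
		 * Over-allocate by 2M so the buffer can be aligned to
		 * PMD_SIZE: an empty PTE page is only reclaimed when a
		 * whole PMD-sized, PMD-aligned range is zapped.
		 */
		raw = mmap(NULL, len + SZ_2M, PROT_READ | PROT_WRITE,
			   MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE, -1, 0);
		if (raw == MAP_FAILED) {
			perror("mmap");
			return 1;
		}
		buf = (char *)(((uintptr_t)raw + SZ_2M - 1) & ~(SZ_2M - 1));

		for (;;) {
			for (size_t i = 0; i < NR_2M; i++) {
				char *p = buf + i * SZ_2M;
				size_t off;

				/* touch 2M memory: write one byte per 4K page */
				for (off = 0; off < SZ_2M; off += 4096)
					p[off] = 1;

				/*
				 * madvise MADV_DONTNEED 2M: drops the pages
				 * and, with CONFIG_PT_RECLAIM, the now-empty
				 * PTE page; watch VmPTE in /proc/<pid>/status.
				 */
				madvise(p, SZ_2M, MADV_DONTNEED);
			}
		}
	}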
 include/linux/mm.h |  1 +
 mm/Kconfig         | 15 ++++++++++
 mm/Makefile        |  1 +
 mm/internal.h      | 19 +++++++++++++
 mm/madvise.c       |  7 ++++-
 mm/memory.c        | 21 ++++++++++++--
 mm/pt_reclaim.c    | 71 ++++++++++++++++++++++++++++++++++++++++++++++
 7 files changed, 132 insertions(+), 3 deletions(-)
 create mode 100644 mm/pt_reclaim.c

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 12fb3b9334269..8f3c824ee5a77 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2319,6 +2319,7 @@ extern void pagefault_out_of_memory(void);
 struct zap_details {
 	struct folio *single_folio;	/* Locked folio to be unmapped */
 	bool even_cows;			/* Zap COWed private pages too? */
+	bool reclaim_pt;		/* Need reclaim page tables? */
 	zap_flags_t zap_flags;		/* Extra flags for zapping */
 };
 
diff --git a/mm/Kconfig b/mm/Kconfig
index 84000b0168086..7949ab121070f 100644
--- a/mm/Kconfig
+++ b/mm/Kconfig
@@ -1301,6 +1301,21 @@ config ARCH_HAS_USER_SHADOW_STACK
 	  The architecture has hardware support for userspace shadow call
 	  stacks (eg, x86 CET, arm64 GCS or RISC-V Zicfiss).
 
+config ARCH_SUPPORTS_PT_RECLAIM
+	def_bool n
+
+config PT_RECLAIM
+	bool "reclaim empty user page table pages"
+	default y
+	depends on ARCH_SUPPORTS_PT_RECLAIM && MMU && SMP
+	select MMU_GATHER_RCU_TABLE_FREE
+	help
+	  Try to reclaim empty user page table pages in paths other than the
+	  munmap and exit_mmap paths.
+
+	  Note: now only empty user PTE page table pages will be reclaimed.
+
+
 source "mm/damon/Kconfig"
 
 endmenu
diff --git a/mm/Makefile b/mm/Makefile
index dba52bb0da8ab..850386a67b3e0 100644
--- a/mm/Makefile
+++ b/mm/Makefile
@@ -146,3 +146,4 @@ obj-$(CONFIG_GENERIC_IOREMAP) += ioremap.o
 obj-$(CONFIG_SHRINKER_DEBUG) += shrinker_debug.o
 obj-$(CONFIG_EXECMEM) += execmem.o
 obj-$(CONFIG_TMPFS_QUOTA) += shmem_quota.o
+obj-$(CONFIG_PT_RECLAIM) += pt_reclaim.o
diff --git a/mm/internal.h b/mm/internal.h
index 74713b44bedb6..3958a965e56e1 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -1545,4 +1545,23 @@ int walk_page_range_mm(struct mm_struct *mm, unsigned long start,
 		       unsigned long end, const struct mm_walk_ops *ops,
 		       void *private);
 
+/* pt_reclaim.c */
+bool try_get_and_clear_pmd(struct mm_struct *mm, pmd_t *pmd, pmd_t *pmdval);
+void free_pte(struct mm_struct *mm, unsigned long addr, struct mmu_gather *tlb,
+	      pmd_t pmdval);
+void try_to_free_pte(struct mm_struct *mm, pmd_t *pmd, unsigned long addr,
+		     struct mmu_gather *tlb);
+
+#ifdef CONFIG_PT_RECLAIM
+bool reclaim_pt_is_enabled(unsigned long start, unsigned long end,
+			   struct zap_details *details);
+#else
+static inline bool reclaim_pt_is_enabled(unsigned long start, unsigned long end,
+					 struct zap_details *details)
+{
+	return false;
+}
+#endif /* CONFIG_PT_RECLAIM */
+
+
 #endif	/* __MM_INTERNAL_H */
diff --git a/mm/madvise.c b/mm/madvise.c
index 0ceae57da7dad..49f3a75046f63 100644
--- a/mm/madvise.c
+++ b/mm/madvise.c
@@ -851,7 +851,12 @@ static int madvise_free_single_vma(struct vm_area_struct *vma,
 static long madvise_dontneed_single_vma(struct vm_area_struct *vma,
 					unsigned long start, unsigned long end)
 {
-	zap_page_range_single(vma, start, end - start, NULL);
+	struct zap_details details = {
+		.reclaim_pt = true,
+		.even_cows = true,
+	};
+
+	zap_page_range_single(vma, start, end - start, &details);
 	return 0;
 }
 
diff --git a/mm/memory.c b/mm/memory.c
index 36a59bea289d1..1fc1f14839916 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -1436,7 +1436,7 @@ copy_page_range(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma)
 static inline bool should_zap_cows(struct zap_details *details)
 {
 	/* By default, zap all pages */
-	if (!details)
+	if (!details || details->reclaim_pt)
 		return true;
 
 	/* Or, we zap COWed pages only if the caller wants to */
@@ -1710,12 +1710,15 @@ static unsigned long zap_pte_range(struct mmu_gather *tlb,
 			       struct zap_details *details)
 {
 	bool force_flush = false, force_break = false;
-	bool any_skipped = false;
 	struct mm_struct *mm = tlb->mm;
 	int rss[NR_MM_COUNTERS];
 	spinlock_t *ptl;
 	pte_t *start_pte;
 	pte_t *pte;
+	pmd_t pmdval;
+	unsigned long start = addr;
+	bool can_reclaim_pt = reclaim_pt_is_enabled(start, end, details);
+	bool direct_reclaim = false;
 	int nr;
 
 retry:
@@ -1728,17 +1728,24 @@ static unsigned long zap_pte_range(struct mmu_gather *tlb,
 	flush_tlb_batched_pending(mm);
 	arch_enter_lazy_mmu_mode();
 	do {
+		bool any_skipped = false;
+
 		if (need_resched())
 			break;
 
 		nr = do_zap_pte_range(tlb, vma, pte, addr, end, details, rss,
 				      &force_flush, &force_break, &any_skipped);
+		if (any_skipped)
+			can_reclaim_pt = false;
 		if (unlikely(force_break)) {
 			addr += nr * PAGE_SIZE;
 			break;
 		}
 	} while (pte += nr, addr += PAGE_SIZE * nr, addr != end);
 
+	if (can_reclaim_pt && addr == end)
+		direct_reclaim = try_get_and_clear_pmd(mm, pmd, &pmdval);
+
 	add_mm_rss_vec(mm, rss);
 	arch_leave_lazy_mmu_mode();
 
@@ -1765,6 +1775,13 @@ static unsigned long zap_pte_range(struct mmu_gather *tlb,
 		goto retry;
 	}
 
+	if (can_reclaim_pt) {
+		if (direct_reclaim)
+			free_pte(mm, start, tlb, pmdval);
+		else
+			try_to_free_pte(mm, pmd, start, tlb);
+	}
+
 	return addr;
 }
 
diff --git a/mm/pt_reclaim.c b/mm/pt_reclaim.c
new file mode 100644
index 0000000000000..6540a3115dde8
--- /dev/null
+++ b/mm/pt_reclaim.c
@@ -0,0 +1,71 @@
+// SPDX-License-Identifier: GPL-2.0
+#include <linux/hugetlb.h>
+#include <asm-generic/tlb.h>
+#include <asm/pgalloc.h>
+
+#include "internal.h"
+
+bool reclaim_pt_is_enabled(unsigned long start, unsigned long end,
+			   struct zap_details *details)
+{
+	return details && details->reclaim_pt && (end - start >= PMD_SIZE);
+}
+
+bool try_get_and_clear_pmd(struct mm_struct *mm, pmd_t *pmd, pmd_t *pmdval)
+{
+	spinlock_t *pml = pmd_lockptr(mm, pmd);
+
+	if (!spin_trylock(pml))
+		return false;
+
+	*pmdval = pmdp_get_lockless(pmd);
+	pmd_clear(pmd);
+	spin_unlock(pml);
+
+	return true;
+}
+
+void free_pte(struct mm_struct *mm, unsigned long addr, struct mmu_gather *tlb,
+	      pmd_t pmdval)
+{
+	pte_free_tlb(tlb, pmd_pgtable(pmdval), addr);
+	mm_dec_nr_ptes(mm);
+}
+
+void try_to_free_pte(struct mm_struct *mm, pmd_t *pmd, unsigned long addr,
+		     struct mmu_gather *tlb)
+{
+	pmd_t pmdval;
+	spinlock_t *pml, *ptl;
+	pte_t *start_pte, *pte;
+	int i;
+
+	pml = pmd_lock(mm, pmd);
+	start_pte = pte_offset_map_rw_nolock(mm, pmd, addr, &pmdval, &ptl);
+	if (!start_pte)
+		goto out_ptl;
+	if (ptl != pml)
+		spin_lock_nested(ptl, SINGLE_DEPTH_NESTING);
+
+	/* Check if it is empty PTE page */
+	for (i = 0, pte = start_pte; i < PTRS_PER_PTE; i++, pte++) {
+		if (!pte_none(ptep_get(pte)))
+			goto out_ptl;
+	}
+	pte_unmap(start_pte);
+
+	pmd_clear(pmd);
+
+	if (ptl != pml)
+		spin_unlock(ptl);
+	spin_unlock(pml);
+
+	free_pte(mm, addr, tlb, pmdval);
+
+	return;
+out_ptl:
+	if (start_pte)
+		pte_unmap_unlock(start_pte, ptl);
+	if (ptl != pml)
+		spin_unlock(pml);
+}