From patchwork Tue Apr  2 12:40:28 2024
From: Lance Yang <ioworker0@gmail.com>
To: akpm@linux-foundation.org
Cc: zokeefe@google.com, ryan.roberts@arm.com, 21cnbao@gmail.com,
    shy828301@gmail.com, david@redhat.com, mhocko@suse.com,
    fengwei.yin@intel.com, xiehuan09@gmail.com, wangkefeng.wang@huawei.com,
    songmuchun@bytedance.com, peterx@redhat.com, minchan@kernel.org,
    linux-mm@kvack.org, linux-kernel@vger.kernel.org,
    Lance Yang <ioworker0@gmail.com>
Subject: [PATCH v4 1/2] mm/madvise: introduce mkold_clean_ptes() batch helper
Date: Tue,  2 Apr 2024 20:40:28 +0800
Message-Id: <20240402124029.47846-2-ioworker0@gmail.com>
In-Reply-To: <20240402124029.47846-1-ioworker0@gmail.com>
References: <20240402124029.47846-1-ioworker0@gmail.com>
X-Patchwork-Id: 13613887
Change the code that clears the young and dirty bits from the PTEs to
use ptep_get_and_clear_full() and set_pte_at(), via the new
mkold_clean_ptes() batch helper function.

Unfortunately, the per-PTE get_and_clear/modify/set approach would
result in unfolding/refolding for contpte mappings on arm64, so
mkold_clean_ptes() is overridden for arm64 to avoid that.

Suggested-by: David Hildenbrand <david@redhat.com>
Suggested-by: Barry Song <21cnbao@gmail.com>
Suggested-by: Ryan Roberts <ryan.roberts@arm.com>
Signed-off-by: Lance Yang <ioworker0@gmail.com>
---
 arch/arm64/include/asm/pgtable.h | 36 ++++++++++++++++++++++++++++++++
 arch/arm64/mm/contpte.c          | 10 +++++++++
 include/linux/pgtable.h          | 30 ++++++++++++++++++++++++++
 3 files changed, 76 insertions(+)
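For readers who want the shape of the helper outside of kernel context,
here is a small standalone C model of the get-and-clear/modify/set batch
semantics. This is illustrative only, not kernel code: the bit positions
are invented, and the C11 atomics stand in for the page-table accessors.

#include <stdatomic.h>
#include <stdint.h>
#include <stdio.h>

#define PTE_YOUNG (1ULL << 10)   /* illustrative "accessed" bit */
#define PTE_DIRTY (1ULL << 55)   /* illustrative "dirty" bit */
#define NPTES 8

/* Model of ptep_get_and_clear_full(): atomically take the entry out,
 * so a racing observer sees either the old value or an empty slot,
 * never a half-updated entry. */
static uint64_t get_and_clear(_Atomic uint64_t *ptep)
{
	return atomic_exchange(ptep, 0);
}

/* The generic mkold_clean_ptes() shape: for each entry in the batch,
 * get-and-clear, strip young+dirty, then write the result back. */
static void mkold_clean_ptes(_Atomic uint64_t *ptep, unsigned int nr)
{
	uint64_t pte;

	for (; nr-- > 0; ptep++) {
		pte = get_and_clear(ptep);
		atomic_store(ptep, pte & ~(PTE_YOUNG | PTE_DIRTY));
	}
}

int main(void)
{
	_Atomic uint64_t ptes[NPTES];

	for (int i = 0; i < NPTES; i++)
		ptes[i] = ((uint64_t)i << 12) | PTE_YOUNG | PTE_DIRTY;

	mkold_clean_ptes(ptes, NPTES);

	for (int i = 0; i < NPTES; i++)
		printf("pte[%d] = %#llx\n", i, (unsigned long long)ptes[i]);
	return 0;
}

Compiled with -std=c11, every entry comes out with only its PFN bits
set; the batched calling convention is what lets an architecture swap in
a smarter whole-range implementation, as the arm64 override below does.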
diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
index 9fd8613b2db2..b032c107090c 100644
--- a/arch/arm64/include/asm/pgtable.h
+++ b/arch/arm64/include/asm/pgtable.h
@@ -1086,6 +1086,27 @@ static inline bool pud_user_accessible_page(pud_t pud)
 }
 #endif
 
+static inline void __mkold_clean_pte(struct vm_area_struct *vma,
+				     unsigned long addr, pte_t *ptep)
+{
+	pte_t old_pte, pte;
+
+	pte = __ptep_get(ptep);
+	do {
+		old_pte = pte;
+		pte = pte_mkclean(pte_mkold(pte));
+		pte_val(pte) = cmpxchg_relaxed(&pte_val(*ptep),
+					       pte_val(old_pte), pte_val(pte));
+	} while (pte_val(pte) != pte_val(old_pte));
+}
+
+static inline void __mkold_clean_ptes(struct vm_area_struct *vma,
+				      unsigned long addr, pte_t *ptep,
+				      unsigned int nr, int full)
+{
+	for (; nr-- > 0; ptep++, addr += PAGE_SIZE)
+		__mkold_clean_pte(vma, addr, ptep);
+}
+
 /*
  * Atomic pte/pmd modifications.
  */
@@ -1379,6 +1400,8 @@ extern void contpte_wrprotect_ptes(struct mm_struct *mm, unsigned long addr,
 extern int contpte_ptep_set_access_flags(struct vm_area_struct *vma,
 				unsigned long addr, pte_t *ptep,
 				pte_t entry, int dirty);
+extern void contpte_ptep_mkold_clean(struct vm_area_struct *vma,
+				unsigned long addr, pte_t *ptep);
 
 static __always_inline void contpte_try_fold(struct mm_struct *mm,
 				unsigned long addr, pte_t *ptep, pte_t pte)
@@ -1603,6 +1626,18 @@ static inline int ptep_set_access_flags(struct vm_area_struct *vma,
 	return contpte_ptep_set_access_flags(vma, addr, ptep, entry, dirty);
 }
 
+#define mkold_clean_ptes mkold_clean_ptes
+static inline void mkold_clean_ptes(struct vm_area_struct *vma,
+		unsigned long addr, pte_t *ptep, unsigned int nr, int full)
+{
+	pte_t orig_pte = __ptep_get(ptep);
+
+	if (likely(!pte_valid_cont(orig_pte)))
+		return __mkold_clean_ptes(vma, addr, ptep, nr, full);
+
+	return contpte_ptep_mkold_clean(vma, addr, ptep);
+}
+
 #else /* CONFIG_ARM64_CONTPTE */
 
 #define ptep_get				__ptep_get
@@ -1622,6 +1657,7 @@ static inline int ptep_set_access_flags(struct vm_area_struct *vma,
 #define wrprotect_ptes				__wrprotect_ptes
 #define __HAVE_ARCH_PTEP_SET_ACCESS_FLAGS
 #define ptep_set_access_flags			__ptep_set_access_flags
+#define mkold_clean_ptes			__mkold_clean_ptes
 
 #endif /* CONFIG_ARM64_CONTPTE */
 
diff --git a/arch/arm64/mm/contpte.c b/arch/arm64/mm/contpte.c
index 1b64b4c3f8bf..560622cfb2a9 100644
--- a/arch/arm64/mm/contpte.c
+++ b/arch/arm64/mm/contpte.c
@@ -322,6 +322,16 @@ int contpte_ptep_test_and_clear_young(struct vm_area_struct *vma,
 }
 EXPORT_SYMBOL_GPL(contpte_ptep_test_and_clear_young);
 
+void contpte_ptep_mkold_clean(struct vm_area_struct *vma, unsigned long addr,
+			      pte_t *ptep)
+{
+	ptep = contpte_align_down(ptep);
+	addr = ALIGN_DOWN(addr, CONT_PTE_SIZE);
+
+	__mkold_clean_ptes(vma, addr, ptep, CONT_PTES, 0);
+}
+EXPORT_SYMBOL_GPL(contpte_ptep_mkold_clean);
+
 int contpte_ptep_clear_flush_young(struct vm_area_struct *vma,
 				   unsigned long addr, pte_t *ptep)
 {
diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
index fa8f92f6e2d7..fd30779fe487 100644
--- a/include/linux/pgtable.h
+++ b/include/linux/pgtable.h
@@ -391,6 +391,36 @@ static inline void mkold_ptes(struct vm_area_struct *vma, unsigned long addr,
 }
 #endif
 
+#ifndef mkold_clean_ptes
+/**
+ * mkold_clean_ptes - Mark PTEs that map consecutive pages of the same
+ *		      folio as old and clean.
+ * @vma: VMA the pages are mapped into.
+ * @addr: Address the first page is mapped at.
+ * @ptep: Page table pointer for the first entry.
+ * @nr: Number of entries to mark old and clean.
+ * @full: Whether we are clearing a full mm.
+ *
+ * May be overridden by the architecture; otherwise, implemented as a
+ * simple loop over ptep_get_and_clear_full().
+ *
+ * Note that PTE bits in the PTE range besides the PFN can differ. For
+ * example, some PTEs might be write-protected.
+ *
+ * Context: The caller holds the page table lock. The PTEs map
+ * consecutive pages that belong to the same folio. The PTEs are all in
+ * the same PMD.
+ */
+static inline void mkold_clean_ptes(struct vm_area_struct *vma,
+		unsigned long addr, pte_t *ptep, unsigned int nr, int full)
+{
+	pte_t pte;
+
+	for (; nr-- > 0; ptep++, addr += PAGE_SIZE) {
+		pte = ptep_get_and_clear_full(vma->vm_mm, addr, ptep, full);
+		set_pte_at(vma->vm_mm, addr, ptep, pte_mkclean(pte_mkold(pte)));
+	}
+}
+#endif
+
 #ifndef __HAVE_ARCH_PMDP_TEST_AND_CLEAR_YOUNG
 #if defined(CONFIG_TRANSPARENT_HUGEPAGE) || defined(CONFIG_ARCH_HAS_NONLEAF_PMD_YOUNG)
 static inline int pmdp_test_and_clear_young(struct vm_area_struct *vma,
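The contpte unfolding/refolding concern that motivates the arm64
override can be sketched in plain C. The model below is a loose
illustration, not arm64's contpte machinery: a "block" is 16 modeled
PTEs carrying an invented contiguous-hint bit, and the counters show
that the per-entry read-modify-write path drops the hint, rescans the
block after every store, and re-establishes the hint, while the batched
path never touches it.

#include <stdint.h>
#include <stdio.h>

#define CONT_PTES 16                /* entries per contiguous block */
#define PTE_CONT  (1ULL << 52)      /* illustrative contiguous-hint bit */
#define PTE_YOUNG (1ULL << 10)      /* illustrative accessed bit */
#define PTE_DIRTY (1ULL << 55)      /* illustrative dirty bit */

static unsigned long unfolds, refolds, fold_checks;

/* Clearing the hint on every entry models contpte unfolding. */
static void unfold(uint64_t *block)
{
	for (int i = 0; i < CONT_PTES; i++)
		block[i] &= ~PTE_CONT;
	unfolds++;
}

/* Refold only if all entries carry identical young/dirty state again;
 * each call scans the whole block, which is the hidden per-store cost. */
static void try_refold(uint64_t *block)
{
	uint64_t bits = block[0] & (PTE_YOUNG | PTE_DIRTY);

	fold_checks++;
	for (int i = 1; i < CONT_PTES; i++)
		if ((block[i] & (PTE_YOUNG | PTE_DIRTY)) != bits)
			return;
	for (int i = 0; i < CONT_PTES; i++)
		block[i] |= PTE_CONT;
	refolds++;
}

/* Per-entry get_and_clear/modify/set: unfold whenever the entry is
 * still part of a folded block, re-check foldability after every store. */
static void per_entry_mkold_clean(uint64_t *block)
{
	for (int i = 0; i < CONT_PTES; i++) {
		if (block[i] & PTE_CONT)
			unfold(block);
		block[i] &= ~(PTE_YOUNG | PTE_DIRTY);
		try_refold(block);
	}
}

/* Batched variant: the change is uniform across the block, so the
 * contiguous hint can simply stay in place the whole time. */
static void batched_mkold_clean(uint64_t *block)
{
	for (int i = 0; i < CONT_PTES; i++)
		block[i] &= ~(PTE_YOUNG | PTE_DIRTY);
}

int main(void)
{
	uint64_t block[CONT_PTES];

	for (int i = 0; i < CONT_PTES; i++)
		block[i] = PTE_CONT | PTE_YOUNG | PTE_DIRTY;
	per_entry_mkold_clean(block);
	printf("per-entry: %lu unfolds, %lu refolds, %lu block scans\n",
	       unfolds, refolds, fold_checks);

	unfolds = refolds = fold_checks = 0;
	for (int i = 0; i < CONT_PTES; i++)
		block[i] = PTE_CONT | PTE_YOUNG | PTE_DIRTY;
	batched_mkold_clean(block);
	printf("batched:   %lu unfolds, %lu refolds, %lu block scans\n",
	       unfolds, refolds, fold_checks);
	return 0;
}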
From patchwork Tue Apr  2 12:40:29 2024
From: Lance Yang <ioworker0@gmail.com>
To: akpm@linux-foundation.org
Cc: zokeefe@google.com, ryan.roberts@arm.com, 21cnbao@gmail.com,
    shy828301@gmail.com, david@redhat.com, mhocko@suse.com,
    fengwei.yin@intel.com, xiehuan09@gmail.com, wangkefeng.wang@huawei.com,
    songmuchun@bytedance.com, peterx@redhat.com, minchan@kernel.org,
    linux-mm@kvack.org, linux-kernel@vger.kernel.org,
    Lance Yang <ioworker0@gmail.com>
Subject: [PATCH v4 2/2] mm/madvise: optimize lazyfreeing with mTHP in madvise_free
Date: Tue,  2 Apr 2024 20:40:29 +0800
Message-Id: <20240402124029.47846-3-ioworker0@gmail.com>
In-Reply-To: <20240402124029.47846-1-ioworker0@gmail.com>
References: <20240402124029.47846-1-ioworker0@gmail.com>
X-Patchwork-Id: 13613888
This patch optimizes lazyfreeing with PTE-mapped mTHP [1] (inspired by
David Hildenbrand [2]). We aim to avoid unnecessary folio splitting when
the large folio is fully mapped within the target range.

If a large folio is locked or shared, or if we fail to split it, we just
leave it in place and advance to the next PTE in the range. Note that
this changes the behavior: previously, any failure of this sort caused
the entire operation to give up. As large folios become more common,
sticking to the old way could result in wasted opportunities.

On an Intel i5 CPU, lazyfreeing a 1 GiB VMA backed by PTE-mapped folios
of the same size results in the following runtimes for
madvise(MADV_FREE) in seconds (shorter is better):

Folio Size |    Old     |    New     | Change
---------------------------------------------
      4KiB |  0.590251  |  0.590259  |    0%
     16KiB |  2.990447  |  0.185655  |  -94%
     32KiB |  2.547831  |  0.104870  |  -95%
     64KiB |  2.457796  |  0.052812  |  -97%
    128KiB |  2.281034  |  0.032777  |  -99%
    256KiB |  2.230387  |  0.017496  |  -99%
    512KiB |  2.189106  |  0.010781  |  -99%
   1024KiB |  2.183949  |  0.007753  |  -99%
   2048KiB |  0.002799  |  0.002804  |    0%

[1] https://lkml.kernel.org/r/20231207161211.2374093-5-ryan.roberts@arm.com
[2] https://lore.kernel.org/linux-mm/20240214204435.167852-1-david@redhat.com

Signed-off-by: Lance Yang <ioworker0@gmail.com>
---
 mm/internal.h |  12 ++++-
 mm/madvise.c  | 147 ++++++++++++++++++++++++++------------------------
 mm/memory.c   |   4 +-
 3 files changed, 88 insertions(+), 75 deletions(-)
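The benchmark program itself is not included in the posting, so the
details are assumptions; a minimal harness along the lines described
(fault in a 1 GiB anonymous VMA, then time a single madvise(MADV_FREE)
over it) might look like this:

#define _DEFAULT_SOURCE
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <time.h>

#define SIZE (1UL << 30) /* 1 GiB, as in the commit message */

static double now_sec(void)
{
	struct timespec ts;

	clock_gettime(CLOCK_MONOTONIC, &ts);
	return ts.tv_sec + ts.tv_nsec / 1e9;
}

int main(void)
{
	char *buf = mmap(NULL, SIZE, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	double t0, t1;

	if (buf == MAP_FAILED) {
		perror("mmap");
		return 1;
	}

	/* Fault in every page; whether the region ends up backed by
	 * 4 KiB folios, mTHP, or PMD-mapped THP depends on the system's
	 * transparent hugepage settings. */
	memset(buf, 1, SIZE);

	t0 = now_sec();
	if (madvise(buf, SIZE, MADV_FREE)) {
		perror("madvise(MADV_FREE)");
		return 1;
	}
	t1 = now_sec();

	printf("MADV_FREE over 1 GiB took %.6f s\n", t1 - t0);
	munmap(buf, SIZE);
	return 0;
}

The folio-size column in the table would then be controlled externally,
e.g. via the mTHP sysfs knobs under
/sys/kernel/mm/transparent_hugepage/, which this sketch does not set.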
diff --git a/mm/internal.h b/mm/internal.h
index 3df06a152ff0..cdc6e2162b30 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -132,6 +132,8 @@ static inline pte_t __pte_batch_clear_ignored(pte_t pte, fpb_t flags)
  *		  first one is writable.
  * @any_young: Optional pointer to indicate whether any entry except the
  *		  first one is young.
+ * @any_dirty: Optional pointer to indicate whether any entry except the
+ *		  first one is dirty.
  *
  * Detect a PTE batch: consecutive (present) PTEs that map consecutive
  * pages of the same large folio.
@@ -147,18 +149,20 @@ static inline pte_t __pte_batch_clear_ignored(pte_t pte, fpb_t flags)
  */
 static inline int folio_pte_batch(struct folio *folio, unsigned long addr,
 		pte_t *start_ptep, pte_t pte, int max_nr, fpb_t flags,
-		bool *any_writable, bool *any_young)
+		bool *any_writable, bool *any_young, bool *any_dirty)
 {
 	unsigned long folio_end_pfn = folio_pfn(folio) + folio_nr_pages(folio);
 	const pte_t *end_ptep = start_ptep + max_nr;
 	pte_t expected_pte, *ptep;
-	bool writable, young;
+	bool writable, young, dirty;
 	int nr;
 
 	if (any_writable)
 		*any_writable = false;
 	if (any_young)
 		*any_young = false;
+	if (any_dirty)
+		*any_dirty = false;
 
 	VM_WARN_ON_FOLIO(!pte_present(pte), folio);
 	VM_WARN_ON_FOLIO(!folio_test_large(folio) || max_nr < 1, folio);
@@ -174,6 +178,8 @@ static inline int folio_pte_batch(struct folio *folio, unsigned long addr,
 			writable = !!pte_write(pte);
 		if (any_young)
 			young = !!pte_young(pte);
+		if (any_dirty)
+			dirty = !!pte_dirty(pte);
 		pte = __pte_batch_clear_ignored(pte, flags);
 
 		if (!pte_same(pte, expected_pte))
@@ -191,6 +197,8 @@ static inline int folio_pte_batch(struct folio *folio, unsigned long addr,
 			*any_writable |= writable;
 		if (any_young)
 			*any_young |= young;
+		if (any_dirty)
+			*any_dirty |= dirty;
 
 		nr = pte_batch_hint(ptep, pte);
 		expected_pte = pte_advance_pfn(expected_pte, nr);
diff --git a/mm/madvise.c b/mm/madvise.c
index bd00b83e7c50..8197effd9f14 100644
--- a/mm/madvise.c
+++ b/mm/madvise.c
@@ -321,6 +321,38 @@ static inline bool can_do_file_pageout(struct vm_area_struct *vma)
 	       file_permission(vma->vm_file, MAY_WRITE) == 0;
 }
 
+static inline int madvise_folio_pte_batch(unsigned long addr, unsigned long end,
+					  struct folio *folio, pte_t *pte,
+					  bool *any_writable, bool *any_young,
+					  bool *any_dirty)
+{
+	int max_nr = (end - addr) / PAGE_SIZE;
+	const fpb_t fpb_flags = FPB_IGNORE_DIRTY | FPB_IGNORE_SOFT_DIRTY;
+
+	return folio_pte_batch(folio, addr, pte, ptep_get(pte), max_nr,
+			       fpb_flags, any_writable, any_young, any_dirty);
+}
+
+static inline bool madvise_pte_split_folio(struct mm_struct *mm, pmd_t *pmd,
+					   unsigned long addr, struct folio *folio,
+					   pte_t **pte, spinlock_t **ptl)
+{
+	int err;
+
+	if (!folio_trylock(folio))
+		return false;
+
+	folio_get(folio);
+	pte_unmap_unlock(*pte, *ptl);
+	*pte = NULL;
+	err = split_folio(folio);
+	folio_unlock(folio);
+	folio_put(folio);
+
+	*pte = pte_offset_map_lock(mm, pmd, addr, ptl);
+
+	return err == 0;
+}
+
 static int madvise_cold_or_pageout_pte_range(pmd_t *pmd,
 				unsigned long addr, unsigned long end,
 				struct mm_walk *walk)
@@ -456,40 +488,26 @@ static int madvise_cold_or_pageout_pte_range(pmd_t *pmd,
 		 * next pte in the range.
 		 */
 		if (folio_test_large(folio)) {
-			const fpb_t fpb_flags = FPB_IGNORE_DIRTY |
-						FPB_IGNORE_SOFT_DIRTY;
-			int max_nr = (end - addr) / PAGE_SIZE;
 			bool any_young;
-
-			nr = folio_pte_batch(folio, addr, pte, ptent, max_nr,
-					     fpb_flags, NULL, &any_young);
+			nr = madvise_folio_pte_batch(addr, end, folio, pte,
+						     NULL, &any_young, NULL);
 			if (any_young)
 				ptent = pte_mkyoung(ptent);
 
 			if (nr < folio_nr_pages(folio)) {
-				int err;
-
 				if (folio_likely_mapped_shared(folio))
 					continue;
 				if (pageout_anon_only_filter && !folio_test_anon(folio))
 					continue;
-				if (!folio_trylock(folio))
-					continue;
-				folio_get(folio);
+
 				arch_leave_lazy_mmu_mode();
-				pte_unmap_unlock(start_pte, ptl);
-				start_pte = NULL;
-				err = split_folio(folio);
-				folio_unlock(folio);
-				folio_put(folio);
-				if (err)
-					continue;
-				start_pte = pte =
-					pte_offset_map_lock(mm, pmd, addr, &ptl);
+				if (madvise_pte_split_folio(mm, pmd, addr,
+							    folio, &start_pte, &ptl))
+					nr = 0;
 				if (!start_pte)
 					break;
+				pte = start_pte;
 				arch_enter_lazy_mmu_mode();
-				nr = 0;
 				continue;
 			}
 		}
@@ -688,72 +706,59 @@ static int madvise_free_pte_range(pmd_t *pmd, unsigned long addr,
 			continue;
 
 		/*
-		 * If pmd isn't transhuge but the folio is large and
-		 * is owned by only this process, split it and
-		 * deactivate all pages.
+		 * If we encounter a large folio, only split it if it is not
+		 * fully mapped within the range we are operating on. Otherwise
+		 * leave it as is so that it can be marked as lazyfree. If we
+		 * fail to split a folio, leave it in place and advance to the
+		 * next pte in the range.
 		 */
 		if (folio_test_large(folio)) {
-			int err;
+			bool any_young, any_dirty;
+
+			nr = madvise_folio_pte_batch(addr, end, folio, pte,
+						     NULL, &any_young, &any_dirty);
+			if (any_young || any_dirty)
+				ptent = pte_mkdirty(pte_mkyoung(ptent));
 
-			if (folio_likely_mapped_shared(folio))
-				break;
-			if (!folio_trylock(folio))
-				break;
-			folio_get(folio);
-			arch_leave_lazy_mmu_mode();
-			pte_unmap_unlock(start_pte, ptl);
-			start_pte = NULL;
-			err = split_folio(folio);
-			folio_unlock(folio);
-			folio_put(folio);
-			if (err)
-				break;
-			start_pte = pte =
-				pte_offset_map_lock(mm, pmd, addr, &ptl);
-			if (!start_pte)
-				break;
-			arch_enter_lazy_mmu_mode();
-			pte--;
-			addr -= PAGE_SIZE;
-			continue;
-		}
+			if (nr < folio_nr_pages(folio)) {
+				if (folio_likely_mapped_shared(folio))
+					continue;
 
-		if (folio_test_swapcache(folio) || folio_test_dirty(folio)) {
-			if (!folio_trylock(folio))
-				continue;
-			/*
-			 * If folio is shared with others, we mustn't clear
-			 * the folio's dirty flag.
-			 */
-			if (folio_mapcount(folio) != 1) {
-				folio_unlock(folio);
+				arch_leave_lazy_mmu_mode();
+				if (madvise_pte_split_folio(mm, pmd, addr,
+							    folio, &start_pte, &ptl))
+					nr = 0;
+				if (!start_pte)
+					break;
+				pte = start_pte;
+				arch_enter_lazy_mmu_mode();
 				continue;
 			}
+		}
 
+		if (!folio_trylock(folio))
+			continue;
+		/*
+		 * If we have a large folio at this point, we know it is fully
+		 * mapped so if its mapcount is the same as its number of
+		 * pages, it must be exclusive.
+		 */
+		if (folio_mapcount(folio) != folio_nr_pages(folio)) {
+			folio_unlock(folio);
+			continue;
+		}
+
+		if (folio_test_swapcache(folio) || folio_test_dirty(folio)) {
 			if (folio_test_swapcache(folio) &&
 			    !folio_free_swap(folio)) {
 				folio_unlock(folio);
 				continue;
 			}
-
 			folio_clear_dirty(folio);
-			folio_unlock(folio);
 		}
+		folio_unlock(folio);
 
 		if (pte_young(ptent) || pte_dirty(ptent)) {
-			/*
-			 * Some of architecture(ex, PPC) don't update TLB
-			 * with set_pte_at and tlb_remove_tlb_entry so for
-			 * the portability, remap the pte with old|clean
-			 * after pte clearing.
-			 */
-			ptent = ptep_get_and_clear_full(mm, addr, pte,
-							tlb->fullmm);
-
-			ptent = pte_mkold(ptent);
-			ptent = pte_mkclean(ptent);
-			set_pte_at(mm, addr, pte, ptent);
-			tlb_remove_tlb_entry(tlb, pte, addr);
+			mkold_clean_ptes(vma, addr, pte, nr, tlb->fullmm);
+			tlb_remove_tlb_entries(tlb, pte, nr, addr);
 		}
 		folio_mark_lazyfree(folio);
 	}
diff --git a/mm/memory.c b/mm/memory.c
index 912cd738ec03..24769ecb59e5 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -989,7 +989,7 @@ copy_present_ptes(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma
 		flags |= FPB_IGNORE_SOFT_DIRTY;
 
 	nr = folio_pte_batch(folio, addr, src_pte, pte, max_nr, flags,
-			     &any_writable, NULL);
+			     &any_writable, NULL, NULL);
 	folio_ref_add(folio, nr);
 	if (folio_test_anon(folio)) {
 		if (unlikely(folio_try_dup_anon_rmap_ptes(folio, page,
@@ -1559,7 +1559,7 @@ static inline int zap_present_ptes(struct mmu_gather *tlb,
 	 */
 	if (unlikely(folio_test_large(folio) && max_nr != 1)) {
 		nr = folio_pte_batch(folio, addr, pte, ptent, max_nr, fpb_flags,
-				     NULL, NULL);
+				     NULL, NULL, NULL);
 		zap_present_folio_ptes(tlb, vma, folio, page, pte, ptent, nr,
 				       addr, details, rss, force_flush,
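To close, the batch detection that the new any_dirty output feeds can be
modeled standalone. This is a sketch, not the kernel helper: the PTE
encoding below is invented, and, following the kerneldoc above, the
first entry's own bits are left to the caller rather than folded into
the any_* outputs.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define PTE_YOUNG (1ULL << 10)   /* illustrative accessed bit */
#define PTE_DIRTY (1ULL << 55)   /* illustrative dirty bit */
#define PTE_FLAGS (PTE_YOUNG | PTE_DIRTY)
#define PFN(p)    (((p) & ~PTE_FLAGS) >> 12)

/* Count consecutive-PFN entries starting at ptep[0], up to max_nr,
 * OR-ing the young/dirty bits of every entry after the first into the
 * optional outputs, mirroring folio_pte_batch()'s convention. */
static int pte_batch(const uint64_t *ptep, int max_nr,
		     bool *any_young, bool *any_dirty)
{
	uint64_t expected_pfn = PFN(ptep[0]);
	int nr = 1; /* the first entry always matches itself */

	if (any_young)
		*any_young = false;
	if (any_dirty)
		*any_dirty = false;

	while (nr < max_nr && PFN(ptep[nr]) == expected_pfn + nr) {
		if (any_young)
			*any_young |= !!(ptep[nr] & PTE_YOUNG);
		if (any_dirty)
			*any_dirty |= !!(ptep[nr] & PTE_DIRTY);
		nr++;
	}
	return nr;
}

int main(void)
{
	uint64_t ptes[8];
	bool young, dirty;
	int nr;

	for (int i = 0; i < 8; i++)
		ptes[i] = (uint64_t)(100 + i) << 12;
	ptes[3] |= PTE_DIRTY; /* one dirty entry taints the whole batch */

	nr = pte_batch(ptes, 8, &young, &dirty);
	printf("batch of %d: any_young=%d any_dirty=%d\n", nr, young, dirty);
	return 0;
}

This is why madvise_free_pte_range() above can apply pte_mkdirty()/
pte_mkyoung() to the single representative ptent for the batch: the
any_* flags summarize the rest of the entries.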