From patchwork Fri Feb 14 09:30:13 2025
X-Patchwork-Submitter: Barry Song <21cnbao@gmail.com>
X-Patchwork-Id: 13974666
From: Barry Song <21cnbao@gmail.com>
To: akpm@linux-foundation.org, linux-mm@kvack.org
Cc: 21cnbao@gmail.com, baolin.wang@linux.alibaba.com, chrisl@kernel.org,
 david@redhat.com, ioworker0@gmail.com, kasong@tencent.com,
 linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org,
 linux-riscv@lists.infradead.org, lorenzo.stoakes@oracle.com,
 ryan.roberts@arm.com, v-songbaohua@oppo.com, x86@kernel.org,
 ying.huang@intel.com, zhengtangquan@oppo.com,
 Catalin Marinas, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
 Dave Hansen, "H. Peter Anvin", Anshuman Khandual, Shaoqin Huang,
 Gavin Shan, Mark Rutland, "Kirill A. Shutemov", Yosry Ahmed,
 Paul Walmsley, Palmer Dabbelt, Albert Ou, Yicong Yang,
 Will Deacon, Kefeng Wang
Subject: [PATCH v4 2/4] mm: Support tlbbatch flush for a range of PTEs
Date: Fri, 14 Feb 2025 22:30:13 +1300
Message-Id: <20250214093015.51024-3-21cnbao@gmail.com>
X-Mailer: git-send-email 2.39.3 (Apple Git-146)
In-Reply-To: <20250214093015.51024-1-21cnbao@gmail.com>
References: <20250214093015.51024-1-21cnbao@gmail.com>
MIME-Version: 1.0

From: Barry Song

This patch lays the groundwork for supporting batch PTE unmapping in
try_to_unmap_one(). It introduces range handling for TLB batch flushing,
with the range currently fixed at a single page (PAGE_SIZE).

The function __flush_tlb_range_nosync() is architecture-specific and is
only used within arch/arm64. It needs only the mm structure, not the
whole vma. To allow its reuse by arch_tlbbatch_add_pending(), which has
an mm but no vma, this patch changes __flush_tlb_range_nosync() to take
mm as its parameter.

Cc: Catalin Marinas
Cc: Thomas Gleixner
Cc: Ingo Molnar
Cc: Borislav Petkov
Cc: Dave Hansen
Cc: "H. Peter Anvin"
Cc: Anshuman Khandual
Cc: Ryan Roberts
Cc: Shaoqin Huang
Cc: Gavin Shan
Cc: Mark Rutland
Cc: David Hildenbrand
Cc: Lance Yang
Cc: "Kirill A. Shutemov"
Cc: Yosry Ahmed
Cc: Paul Walmsley
Cc: Palmer Dabbelt
Cc: Albert Ou
Cc: Yicong Yang
Signed-off-by: Barry Song
Acked-by: Will Deacon
Reviewed-by: Kefeng Wang
---
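A quick sketch of the resulting call chain, for reviewers (illustrative
only; the real code is in the hunks below):

	/* mm/rmap.c: the batched-unmap path now queues an explicit range */
	set_tlb_ubc_flush_pending(mm, pteval, address, address + PAGE_SIZE);
	  /* -> generic hook, now taking [start, end) instead of one uaddr */
	  arch_tlbbatch_add_pending(&tlb_ubc->arch, mm, start, end);
	    /* -> arm64: ranged TLBI, trailing DSB deferred to the batch flush */
	    __flush_tlb_range_nosync(mm, start, end, PAGE_SIZE, true, 3);
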
 arch/arm64/include/asm/tlbflush.h | 23 +++++++++++------------
 arch/arm64/mm/contpte.c           |  2 +-
 arch/riscv/include/asm/tlbflush.h |  3 +--
 arch/riscv/mm/tlbflush.c          |  3 +--
 arch/x86/include/asm/tlbflush.h   |  3 +--
 mm/rmap.c                         | 10 +++++-----
 6 files changed, 20 insertions(+), 24 deletions(-)

diff --git a/arch/arm64/include/asm/tlbflush.h b/arch/arm64/include/asm/tlbflush.h
index bc94e036a26b..b7e1920570bd 100644
--- a/arch/arm64/include/asm/tlbflush.h
+++ b/arch/arm64/include/asm/tlbflush.h
@@ -322,13 +322,6 @@ static inline bool arch_tlbbatch_should_defer(struct mm_struct *mm)
 	return true;
 }
 
-static inline void arch_tlbbatch_add_pending(struct arch_tlbflush_unmap_batch *batch,
-					     struct mm_struct *mm,
-					     unsigned long uaddr)
-{
-	__flush_tlb_page_nosync(mm, uaddr);
-}
-
 /*
  * If mprotect/munmap/etc occurs during TLB batched flushing, we need to
  * synchronise all the TLBI issued with a DSB to avoid the race mentioned in
@@ -448,7 +441,7 @@ static inline bool __flush_tlb_range_limit_excess(unsigned long start,
 	return false;
 }
 
-static inline void __flush_tlb_range_nosync(struct vm_area_struct *vma,
+static inline void __flush_tlb_range_nosync(struct mm_struct *mm,
 					    unsigned long start, unsigned long end,
 					    unsigned long stride, bool last_level,
 					    int tlb_level)
@@ -460,12 +453,12 @@ static inline void __flush_tlb_range_nosync(struct vm_area_struct *vma,
 	pages = (end - start) >> PAGE_SHIFT;
 
 	if (__flush_tlb_range_limit_excess(start, end, pages, stride)) {
-		flush_tlb_mm(vma->vm_mm);
+		flush_tlb_mm(mm);
 		return;
 	}
 
 	dsb(ishst);
-	asid = ASID(vma->vm_mm);
+	asid = ASID(mm);
 
 	if (last_level)
 		__flush_tlb_range_op(vale1is, start, pages, stride, asid,
@@ -474,7 +467,7 @@ static inline void __flush_tlb_range_nosync(struct vm_area_struct *vma,
 		__flush_tlb_range_op(vae1is, start, pages, stride, asid,
 				     tlb_level, true, lpa2_is_enabled());
 
-	mmu_notifier_arch_invalidate_secondary_tlbs(vma->vm_mm, start, end);
+	mmu_notifier_arch_invalidate_secondary_tlbs(mm, start, end);
 }
 
 static inline void __flush_tlb_range(struct vm_area_struct *vma,
@@ -482,7 +475,7 @@ static inline void __flush_tlb_range(struct vm_area_struct *vma,
 			     unsigned long stride, bool last_level,
 			     int tlb_level)
 {
-	__flush_tlb_range_nosync(vma, start, end, stride,
+	__flush_tlb_range_nosync(vma->vm_mm, start, end, stride,
 				 last_level, tlb_level);
 	dsb(ish);
 }
@@ -533,6 +526,12 @@ static inline void __flush_tlb_kernel_pgtable(unsigned long kaddr)
 	dsb(ish);
 	isb();
 }
+
+static inline void arch_tlbbatch_add_pending(struct arch_tlbflush_unmap_batch *batch,
+		struct mm_struct *mm, unsigned long start, unsigned long end)
+{
+	__flush_tlb_range_nosync(mm, start, end, PAGE_SIZE, true, 3);
+}
 #endif
 
 #endif
diff --git a/arch/arm64/mm/contpte.c b/arch/arm64/mm/contpte.c
index 55107d27d3f8..bcac4f55f9c1 100644
--- a/arch/arm64/mm/contpte.c
+++ b/arch/arm64/mm/contpte.c
@@ -335,7 +335,7 @@ int contpte_ptep_clear_flush_young(struct vm_area_struct *vma,
 		 * eliding the trailing DSB applies here.
 		 */
 		addr = ALIGN_DOWN(addr, CONT_PTE_SIZE);
-		__flush_tlb_range_nosync(vma, addr, addr + CONT_PTE_SIZE,
+		__flush_tlb_range_nosync(vma->vm_mm, addr, addr + CONT_PTE_SIZE,
 					 PAGE_SIZE, true, 3);
 	}
 
diff --git a/arch/riscv/include/asm/tlbflush.h b/arch/riscv/include/asm/tlbflush.h
index 72e559934952..ce0dd0fed764 100644
--- a/arch/riscv/include/asm/tlbflush.h
+++ b/arch/riscv/include/asm/tlbflush.h
@@ -60,8 +60,7 @@ void flush_pmd_tlb_range(struct vm_area_struct *vma, unsigned long start,
 
 bool arch_tlbbatch_should_defer(struct mm_struct *mm);
 void arch_tlbbatch_add_pending(struct arch_tlbflush_unmap_batch *batch,
-			       struct mm_struct *mm,
-			       unsigned long uaddr);
+		struct mm_struct *mm, unsigned long start, unsigned long end);
 void arch_flush_tlb_batched_pending(struct mm_struct *mm);
 void arch_tlbbatch_flush(struct arch_tlbflush_unmap_batch *batch);
 
diff --git a/arch/riscv/mm/tlbflush.c b/arch/riscv/mm/tlbflush.c
index 9b6e86ce3867..74dd9307fbf1 100644
--- a/arch/riscv/mm/tlbflush.c
+++ b/arch/riscv/mm/tlbflush.c
@@ -186,8 +186,7 @@ bool arch_tlbbatch_should_defer(struct mm_struct *mm)
 }
 
 void arch_tlbbatch_add_pending(struct arch_tlbflush_unmap_batch *batch,
-			       struct mm_struct *mm,
-			       unsigned long uaddr)
+		struct mm_struct *mm, unsigned long start, unsigned long end)
 {
 	cpumask_or(&batch->cpumask, &batch->cpumask, mm_cpumask(mm));
 }
diff --git a/arch/x86/include/asm/tlbflush.h b/arch/x86/include/asm/tlbflush.h
index 02fc2aa06e9e..29373da7b00a 100644
--- a/arch/x86/include/asm/tlbflush.h
+++ b/arch/x86/include/asm/tlbflush.h
@@ -279,8 +279,7 @@ static inline u64 inc_mm_tlb_gen(struct mm_struct *mm)
 }
 
 static inline void arch_tlbbatch_add_pending(struct arch_tlbflush_unmap_batch *batch,
-					     struct mm_struct *mm,
-					     unsigned long uaddr)
+		struct mm_struct *mm, unsigned long start, unsigned long end)
 {
 	inc_mm_tlb_gen(mm);
 	cpumask_or(&batch->cpumask, &batch->cpumask, mm_cpumask(mm));
diff --git a/mm/rmap.c b/mm/rmap.c
index 1320527e90cd..89e51a7a9509 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -672,7 +672,7 @@ void try_to_unmap_flush_dirty(void)
 	(TLB_FLUSH_BATCH_PENDING_MASK / 2)
 
 static void set_tlb_ubc_flush_pending(struct mm_struct *mm, pte_t pteval,
-				      unsigned long uaddr)
+				      unsigned long start, unsigned long end)
 {
 	struct tlbflush_unmap_batch *tlb_ubc = &current->tlb_ubc;
 	int batch;
@@ -681,7 +681,7 @@ static void set_tlb_ubc_flush_pending(struct mm_struct *mm, pte_t pteval,
 	if (!pte_accessible(mm, pteval))
 		return;
 
-	arch_tlbbatch_add_pending(&tlb_ubc->arch, mm, uaddr);
+	arch_tlbbatch_add_pending(&tlb_ubc->arch, mm, start, end);
 	tlb_ubc->flush_required = true;
 
 	/*
@@ -757,7 +757,7 @@ void flush_tlb_batched_pending(struct mm_struct *mm)
 }
 #else
 static void set_tlb_ubc_flush_pending(struct mm_struct *mm, pte_t pteval,
-				      unsigned long uaddr)
+				      unsigned long start, unsigned long end)
 {
 }
 
@@ -1946,7 +1946,7 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
 			 */
 			pteval = ptep_get_and_clear(mm, address, pvmw.pte);
 
-			set_tlb_ubc_flush_pending(mm, pteval, address);
+			set_tlb_ubc_flush_pending(mm, pteval, address, address + PAGE_SIZE);
 		} else {
 			pteval = ptep_clear_flush(vma, address, pvmw.pte);
 		}
@@ -2329,7 +2329,7 @@ static bool try_to_migrate_one(struct folio *folio, struct vm_area_struct *vma,
 			 */
 			pteval = ptep_get_and_clear(mm, address, pvmw.pte);
 
-			set_tlb_ubc_flush_pending(mm, pteval, address);
+			set_tlb_ubc_flush_pending(mm, pteval, address, address + PAGE_SIZE);
 		} else {
 			pteval = ptep_clear_flush(vma, address, pvmw.pte);
 		}
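
With the range plumbed through, a subsequent batched-unmap caller could
queue a whole large folio in one call instead of page-by-page. A
hypothetical caller shape (nr_pages is illustrative and not defined in
this patch):

	/* hypothetical: queue nr_pages contiguous PTEs in one shot */
	set_tlb_ubc_flush_pending(mm, pteval, address,
				  address + nr_pages * PAGE_SIZE);

On arm64 that would turn nr_pages per-page invalidations into a single
ranged-TLBI sequence via __flush_tlb_range_nosync().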