From patchwork Mon Jan 13 03:38:59 2025
X-Patchwork-Submitter: Barry Song <21cnbao@gmail.com>
X-Patchwork-Id: 13936679
From: Barry Song <21cnbao@gmail.com>
To: akpm@linux-foundation.org, linux-mm@kvack.org
Cc: baolin.wang@linux.alibaba.com, chrisl@kernel.org, david@redhat.com,
	ioworker0@gmail.com, kasong@tencent.com,
	linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org,
	ryan.roberts@arm.com, v-songbaohua@oppo.com, x86@kernel.org,
	linux-riscv@lists.infradead.org, ying.huang@intel.com,
	zhengtangquan@oppo.com, lorenzo.stoakes@oracle.com,
	Catalin Marinas, Will Deacon, Thomas Gleixner, Ingo Molnar,
	Borislav Petkov, Dave Hansen, "H. Peter Anvin", Anshuman Khandual,
	Shaoqin Huang, Gavin Shan, Kefeng Wang, Mark Rutland,
	"Kirill A. Shutemov", Yosry Ahmed, Paul Walmsley, Palmer Dabbelt,
	Albert Ou, Yicong Yang
Subject: [PATCH v2 2/4] mm: Support tlbbatch flush for a range of PTEs
Date: Mon, 13 Jan 2025 16:38:59 +1300
Message-Id: <20250113033901.68951-3-21cnbao@gmail.com>
X-Mailer: git-send-email 2.39.3 (Apple Git-146)
In-Reply-To: <20250113033901.68951-1-21cnbao@gmail.com>
References: <20250113033901.68951-1-21cnbao@gmail.com>

From: Barry Song

This is a preparatory patch to support batch PTE unmapping in
`try_to_unmap_one`. It first introduces range handling for `tlbbatch`
flush. Currently, the range is always set to PAGE_SIZE.

Cc: Catalin Marinas
Cc: Will Deacon
Cc: Thomas Gleixner
Cc: Ingo Molnar
Cc: Borislav Petkov
Cc: Dave Hansen
Cc: "H. Peter Anvin"
Cc: Anshuman Khandual
Cc: Ryan Roberts
Cc: Shaoqin Huang
Cc: Gavin Shan
Cc: Kefeng Wang
Cc: Mark Rutland
Cc: David Hildenbrand
Cc: Lance Yang
Cc: "Kirill A. Shutemov"
Cc: Yosry Ahmed
Cc: Paul Walmsley
Cc: Palmer Dabbelt
Cc: Albert Ou
Cc: Yicong Yang
Signed-off-by: Barry Song
---
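
As context for where the new "size" argument is heading (a hypothetical
sketch, not part of this patch): once a caller clears all the contiguous
PTEs mapping a large folio in one go, it can queue a single ranged entry
instead of one entry per page. queue_folio_flush() below is an invented
helper and assumes the declarations from asm/tlbflush.h; only the
arch_tlbbatch_add_pending() signature is the one added by this patch, and
with this patch alone every caller still passes PAGE_SIZE, so behaviour
is unchanged.

static void queue_folio_flush(struct arch_tlbflush_unmap_batch *batch,
			      struct mm_struct *mm, unsigned long uaddr,
			      unsigned int nr_pages)
{
	/* One deferred entry covering the whole contiguous mapping ... */
	arch_tlbbatch_add_pending(batch, mm, uaddr, nr_pages * PAGE_SIZE);

	/*
	 * ... instead of nr_pages separate calls of
	 *	arch_tlbbatch_add_pending(batch, mm,
	 *				  uaddr + i * PAGE_SIZE, PAGE_SIZE);
	 */
}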
 arch/arm64/include/asm/tlbflush.h | 26 ++++++++++++++------------
 arch/arm64/mm/contpte.c           |  2 +-
 arch/riscv/include/asm/tlbflush.h |  3 ++-
 arch/riscv/mm/tlbflush.c          |  3 ++-
 arch/x86/include/asm/tlbflush.h   |  3 ++-
 mm/rmap.c                         | 12 +++++++-----
 6 files changed, 28 insertions(+), 21 deletions(-)

diff --git a/arch/arm64/include/asm/tlbflush.h b/arch/arm64/include/asm/tlbflush.h
index bc94e036a26b..f34e4fab5aa2 100644
--- a/arch/arm64/include/asm/tlbflush.h
+++ b/arch/arm64/include/asm/tlbflush.h
@@ -322,13 +322,6 @@ static inline bool arch_tlbbatch_should_defer(struct mm_struct *mm)
 	return true;
 }
 
-static inline void arch_tlbbatch_add_pending(struct arch_tlbflush_unmap_batch *batch,
-					     struct mm_struct *mm,
-					     unsigned long uaddr)
-{
-	__flush_tlb_page_nosync(mm, uaddr);
-}
-
 /*
  * If mprotect/munmap/etc occurs during TLB batched flushing, we need to
  * synchronise all the TLBI issued with a DSB to avoid the race mentioned in
@@ -448,7 +441,7 @@ static inline bool __flush_tlb_range_limit_excess(unsigned long start,
 	return false;
 }
 
-static inline void __flush_tlb_range_nosync(struct vm_area_struct *vma,
+static inline void __flush_tlb_range_nosync(struct mm_struct *mm,
 				     unsigned long start, unsigned long end,
 				     unsigned long stride, bool last_level,
 				     int tlb_level)
@@ -460,12 +453,12 @@ static inline void __flush_tlb_range_nosync(struct vm_area_struct *vma,
 	pages = (end - start) >> PAGE_SHIFT;
 
 	if (__flush_tlb_range_limit_excess(start, end, pages, stride)) {
-		flush_tlb_mm(vma->vm_mm);
+		flush_tlb_mm(mm);
 		return;
 	}
 
 	dsb(ishst);
-	asid = ASID(vma->vm_mm);
+	asid = ASID(mm);
 
 	if (last_level)
 		__flush_tlb_range_op(vale1is, start, pages, stride, asid,
@@ -474,7 +467,7 @@ static inline void __flush_tlb_range_nosync(struct vm_area_struct *vma,
 		__flush_tlb_range_op(vae1is, start, pages, stride, asid,
 				     tlb_level, true, lpa2_is_enabled());
 
-	mmu_notifier_arch_invalidate_secondary_tlbs(vma->vm_mm, start, end);
+	mmu_notifier_arch_invalidate_secondary_tlbs(mm, start, end);
 }
 
 static inline void __flush_tlb_range(struct vm_area_struct *vma,
@@ -482,7 +475,7 @@ static inline void __flush_tlb_range(struct vm_area_struct *vma,
 			     unsigned long stride, bool last_level,
 			     int tlb_level)
 {
-	__flush_tlb_range_nosync(vma, start, end, stride,
+	__flush_tlb_range_nosync(vma->vm_mm, start, end, stride,
 				 last_level, tlb_level);
 	dsb(ish);
 }
@@ -533,6 +526,15 @@ static inline void __flush_tlb_kernel_pgtable(unsigned long kaddr)
 	dsb(ish);
 	isb();
 }
+
+static inline void arch_tlbbatch_add_pending(struct arch_tlbflush_unmap_batch *batch,
+					     struct mm_struct *mm,
+					     unsigned long uaddr,
+					     unsigned long size)
+{
+	__flush_tlb_range_nosync(mm, uaddr, uaddr + size,
+				 PAGE_SIZE, true, 3);
+}
 #endif
 
 #endif
diff --git a/arch/arm64/mm/contpte.c b/arch/arm64/mm/contpte.c
index 55107d27d3f8..bcac4f55f9c1 100644
--- a/arch/arm64/mm/contpte.c
+++ b/arch/arm64/mm/contpte.c
@@ -335,7 +335,7 @@ int contpte_ptep_clear_flush_young(struct vm_area_struct *vma,
 		 * eliding the trailing DSB applies here.
 		 */
 		addr = ALIGN_DOWN(addr, CONT_PTE_SIZE);
-		__flush_tlb_range_nosync(vma, addr, addr + CONT_PTE_SIZE,
+		__flush_tlb_range_nosync(vma->vm_mm, addr, addr + CONT_PTE_SIZE,
 					 PAGE_SIZE, true, 3);
 	}
diff --git a/arch/riscv/include/asm/tlbflush.h b/arch/riscv/include/asm/tlbflush.h
index 72e559934952..7f3ea687ce33 100644
--- a/arch/riscv/include/asm/tlbflush.h
+++ b/arch/riscv/include/asm/tlbflush.h
@@ -61,7 +61,8 @@ void flush_pmd_tlb_range(struct vm_area_struct *vma, unsigned long start,
 bool arch_tlbbatch_should_defer(struct mm_struct *mm);
 void arch_tlbbatch_add_pending(struct arch_tlbflush_unmap_batch *batch,
 			       struct mm_struct *mm,
-			       unsigned long uaddr);
+			       unsigned long uaddr,
+			       unsigned long size);
 void arch_flush_tlb_batched_pending(struct mm_struct *mm);
 void arch_tlbbatch_flush(struct arch_tlbflush_unmap_batch *batch);
 
diff --git a/arch/riscv/mm/tlbflush.c b/arch/riscv/mm/tlbflush.c
index 9b6e86ce3867..aeda64a36d50 100644
--- a/arch/riscv/mm/tlbflush.c
+++ b/arch/riscv/mm/tlbflush.c
@@ -187,7 +187,8 @@ bool arch_tlbbatch_should_defer(struct mm_struct *mm)
 
 void arch_tlbbatch_add_pending(struct arch_tlbflush_unmap_batch *batch,
 			       struct mm_struct *mm,
-			       unsigned long uaddr)
+			       unsigned long uaddr,
+			       unsigned long size)
 {
 	cpumask_or(&batch->cpumask, &batch->cpumask, mm_cpumask(mm));
 }
diff --git a/arch/x86/include/asm/tlbflush.h b/arch/x86/include/asm/tlbflush.h
index 69e79fff41b8..4b62a6329b8f 100644
--- a/arch/x86/include/asm/tlbflush.h
+++ b/arch/x86/include/asm/tlbflush.h
@@ -279,7 +279,8 @@ static inline u64 inc_mm_tlb_gen(struct mm_struct *mm)
 
 static inline void arch_tlbbatch_add_pending(struct arch_tlbflush_unmap_batch *batch,
 					     struct mm_struct *mm,
-					     unsigned long uaddr)
+					     unsigned long uaddr,
+					     unsigned long size)
 {
 	inc_mm_tlb_gen(mm);
 	cpumask_or(&batch->cpumask, &batch->cpumask, mm_cpumask(mm));
diff --git a/mm/rmap.c b/mm/rmap.c
index de6b8c34e98c..365112af5291 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -672,7 +672,8 @@ void try_to_unmap_flush_dirty(void)
 	(TLB_FLUSH_BATCH_PENDING_MASK / 2)
 
 static void set_tlb_ubc_flush_pending(struct mm_struct *mm, pte_t pteval,
-				      unsigned long uaddr)
+				      unsigned long uaddr,
+				      unsigned long size)
 {
 	struct tlbflush_unmap_batch *tlb_ubc = &current->tlb_ubc;
 	int batch;
@@ -681,7 +682,7 @@ static void set_tlb_ubc_flush_pending(struct mm_struct *mm, pte_t pteval,
 	if (!pte_accessible(mm, pteval))
 		return;
 
-	arch_tlbbatch_add_pending(&tlb_ubc->arch, mm, uaddr);
+	arch_tlbbatch_add_pending(&tlb_ubc->arch, mm, uaddr, size);
 	tlb_ubc->flush_required = true;
 
 	/*
@@ -757,7 +758,8 @@ void flush_tlb_batched_pending(struct mm_struct *mm)
 }
 #else
 static void set_tlb_ubc_flush_pending(struct mm_struct *mm, pte_t pteval,
-				      unsigned long uaddr)
+				      unsigned long uaddr,
+				      unsigned long size)
 {
 }
 
@@ -1792,7 +1794,7 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
 				 */
 				pteval = ptep_get_and_clear(mm, address, pvmw.pte);
 
-				set_tlb_ubc_flush_pending(mm, pteval, address);
+				set_tlb_ubc_flush_pending(mm, pteval, address, PAGE_SIZE);
 			} else {
 				pteval = ptep_clear_flush(vma, address, pvmw.pte);
 			}
@@ -2164,7 +2166,7 @@ static bool try_to_migrate_one(struct folio *folio, struct vm_area_struct *vma,
 			 */
 			pteval = ptep_get_and_clear(mm, address, pvmw.pte);
 
-			set_tlb_ubc_flush_pending(mm, pteval, address);
+			set_tlb_ubc_flush_pending(mm, pteval, address, PAGE_SIZE);
 		} else {
 			pteval = ptep_clear_flush(vma, address, pvmw.pte);
 		}