From patchwork Wed Feb 5 01:39:59 2025
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Rik van Riel
X-Patchwork-Id: 13960413
From: Rik van Riel
To: x86@kernel.org
Cc: linux-kernel@vger.kernel.org, bp@alien8.de, peterz@infradead.org,
 dave.hansen@linux.intel.com, zhengqi.arch@bytedance.com,
 nadav.amit@gmail.com, thomas.lendacky@amd.com, kernel-team@meta.com,
 linux-mm@kvack.org, akpm@linux-foundation.org, jannh@google.com,
 mhklinux@outlook.com, andrew.cooper3@citrix.com,
 Rik van Riel, Manali Shukla
Subject: [PATCH v8 10/12] x86/mm: do targeted broadcast flushing from tlbbatch code
Date: Tue, 4 Feb 2025 20:39:59 -0500
Message-ID: <20250205014033.3626204-11-riel@surriel.com>
X-Mailer: git-send-email 2.47.1
In-Reply-To: <20250205014033.3626204-1-riel@surriel.com>
References: <20250205014033.3626204-1-riel@surriel.com>

Instead of doing a system-wide TLB flush from arch_tlbbatch_flush,
queue up asynchronous, targeted flushes from arch_tlbbatch_add_pending.

This also allows us to avoid adding the CPUs of processes using broadcast
flushing to the batch->cpumask, and will hopefully further reduce TLB
flushing from the reclaim and compaction paths.

Signed-off-by: Rik van Riel
Tested-by: Manali Shukla
---
 arch/x86/include/asm/tlbbatch.h |  1 +
 arch/x86/include/asm/tlbflush.h | 12 ++-----
 arch/x86/mm/tlb.c               | 57 +++++++++++++++++++++++++++++++--
 3 files changed, 58 insertions(+), 12 deletions(-)

diff --git a/arch/x86/include/asm/tlbbatch.h b/arch/x86/include/asm/tlbbatch.h
index 1ad56eb3e8a8..f9a17edf63ad 100644
--- a/arch/x86/include/asm/tlbbatch.h
+++ b/arch/x86/include/asm/tlbbatch.h
@@ -10,6 +10,7 @@ struct arch_tlbflush_unmap_batch {
 	 * the PFNs being flushed..
 	 */
 	struct cpumask cpumask;
+	bool used_invlpgb;
 };
 
 #endif /* _ARCH_X86_TLBBATCH_H */
diff --git a/arch/x86/include/asm/tlbflush.h b/arch/x86/include/asm/tlbflush.h
index 7e2f3f7f6455..f8aaa4bcb4d8 100644
--- a/arch/x86/include/asm/tlbflush.h
+++ b/arch/x86/include/asm/tlbflush.h
@@ -359,21 +359,15 @@ static inline u64 inc_mm_tlb_gen(struct mm_struct *mm)
 	return atomic64_inc_return(&mm->context.tlb_gen);
 }
 
-static inline void arch_tlbbatch_add_pending(struct arch_tlbflush_unmap_batch *batch,
-					     struct mm_struct *mm,
-					     unsigned long uaddr)
-{
-	inc_mm_tlb_gen(mm);
-	cpumask_or(&batch->cpumask, &batch->cpumask, mm_cpumask(mm));
-	mmu_notifier_arch_invalidate_secondary_tlbs(mm, 0, -1UL);
-}
-
 static inline void arch_flush_tlb_batched_pending(struct mm_struct *mm)
 {
 	flush_tlb_mm(mm);
 }
 
 extern void arch_tlbbatch_flush(struct arch_tlbflush_unmap_batch *batch);
+extern void arch_tlbbatch_add_pending(struct arch_tlbflush_unmap_batch *batch,
+				      struct mm_struct *mm,
+				      unsigned long uaddr);
 
 static inline bool pte_flags_need_flush(unsigned long oldflags,
 					unsigned long newflags,
diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
index 7b363ae1569b..c064e27df1f3 100644
--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -1646,9 +1646,7 @@ void arch_tlbbatch_flush(struct arch_tlbflush_unmap_batch *batch)
 	 * a local TLB flush is needed. Optimize this use-case by calling
 	 * flush_tlb_func_local() directly in this case.
 	 */
-	if (cpu_feature_enabled(X86_FEATURE_INVLPGB)) {
-		invlpgb_flush_all_nonglobals();
-	} else if (cpumask_any_but(&batch->cpumask, cpu) < nr_cpu_ids) {
+	if (cpumask_any_but(&batch->cpumask, cpu) < nr_cpu_ids) {
 		flush_tlb_multi(&batch->cpumask, info);
 	} else if (cpumask_test_cpu(cpu, &batch->cpumask)) {
 		lockdep_assert_irqs_enabled();
@@ -1657,12 +1655,65 @@ void arch_tlbbatch_flush(struct arch_tlbflush_unmap_batch *batch)
 		local_irq_enable();
 	}
 
+	/*
+	 * If we issued (asynchronous) INVLPGB flushes, wait for them here.
+	 * The cpumask above contains only CPUs that were running tasks
+	 * not using broadcast TLB flushing.
+	 */
+	if (cpu_feature_enabled(X86_FEATURE_INVLPGB) && batch->used_invlpgb) {
+		tlbsync();
+		migrate_enable();
+		batch->used_invlpgb = false;
+	}
+
 	cpumask_clear(&batch->cpumask);
 
 	put_flush_tlb_info();
 	put_cpu();
 }
 
+void arch_tlbbatch_add_pending(struct arch_tlbflush_unmap_batch *batch,
+			       struct mm_struct *mm,
+			       unsigned long uaddr)
+{
+	u16 asid = mm_global_asid(mm);
+
+	if (asid) {
+		/*
+		 * Queue up an asynchronous invalidation. The corresponding
+		 * TLBSYNC is done in arch_tlbbatch_flush(), and must be done
+		 * on the same CPU.
+		 */
+		if (!batch->used_invlpgb) {
+			batch->used_invlpgb = true;
+			migrate_disable();
+		}
+		invlpgb_flush_user_nr_nosync(kern_pcid(asid), uaddr, 1, false);
+		/* Do any CPUs supporting INVLPGB need PTI? */
+		if (static_cpu_has(X86_FEATURE_PTI))
+			invlpgb_flush_user_nr_nosync(user_pcid(asid), uaddr, 1, false);
+
+		/*
+		 * Some CPUs might still be using a local ASID for this
+		 * process, and require IPIs, while others are using the
+		 * global ASID.
+		 *
+		 * In this corner case we need to do both the broadcast
+		 * TLB invalidation, and send IPIs. The IPIs will help
+		 * stragglers transition to the broadcast ASID.
+ */ + if (READ_ONCE(mm->context.asid_transition)) + asid = 0; + } + + if (!asid) { + inc_mm_tlb_gen(mm); + cpumask_or(&batch->cpumask, &batch->cpumask, mm_cpumask(mm)); + } + + mmu_notifier_arch_invalidate_secondary_tlbs(mm, 0, -1UL); +} + /* * Blindly accessing user memory from NMI context can be dangerous * if we're in the middle of switching the current user task or