From patchwork Fri Oct 4 18:27:34 2024
X-Patchwork-Submitter: Mathieu Desnoyers
X-Patchwork-Id: 13822869
From: Mathieu Desnoyers
To: Boqun Feng
Cc: linux-kernel@vger.kernel.org, Mathieu Desnoyers, Linus Torvalds,
 Andrew Morton, Peter Zijlstra, Nicholas Piggin, Michael Ellerman,
 Greg Kroah-Hartman, Sebastian Andrzej Siewior, "Paul E. McKenney",
 Will Deacon, Alan Stern, John Stultz, Neeraj Upadhyay,
 Frederic Weisbecker, Joel Fernandes, Josh Triplett, Uladzislau Rezki,
 Steven Rostedt, Lai Jiangshan, Zqiang, Ingo Molnar, Waiman Long,
 Mark Rutland, Thomas Gleixner, Vlastimil Babka, maged.michael@gmail.com,
 Mateusz Guzik, Jonas Oberhauser, rcu@vger.kernel.org, linux-mm@kvack.org,
 lkmm@lists.linux.dev
Subject: [RFC PATCH v2 4/4] sched+mm: Use hazard pointers to track lazy active mm existence
Date: Fri, 4 Oct 2024 14:27:34 -0400
Message-Id: <20241004182734.1761555-5-mathieu.desnoyers@efficios.com>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20241004182734.1761555-1-mathieu.desnoyers@efficios.com>
References: <20241004182734.1761555-1-mathieu.desnoyers@efficios.com>
MIME-Version: 1.0

Replace lazy active mm existence tracking with hazard pointers.

This removes the following implementations and their associated config
options:

- MMU_LAZY_TLB_REFCOUNT,
- MMU_LAZY_TLB_SHOOTDOWN,
- the call_rcu() delayed mm drop used on PREEMPT_RT.

It leverages the fact that each CPU only ever has at most one lazy
active mm, which makes it a very good fit for a hazard pointer domain
implemented with one hazard pointer slot per CPU.

* Benchmarks:

will-it-scale context_switch1_threads

nr threads (-t)    speedup
          1         -0.2%
          2         +0.4%
          3         +0.2%
          6         +0.6%
         12         +0.8%
         24           +3%
         48          +12%
         96          +21%
        192          +28%
        384           +4%
        768         -0.6%

Methodology: Each test is the average of 20 iterations; the median
result of 3 test runs is reported.

Test hardware:

CPU(s):                 384
On-line CPU(s) list:    0-383
Vendor ID:              AuthenticAMD
Model name:             AMD EPYC 9654 96-Core Processor
CPU family:             25
Model:                  17
Thread(s) per core:     2
Core(s) per socket:     96
Socket(s):              2
Stepping:               1
Frequency boost:        enabled
CPU(s) scaling MHz:     100%
CPU max MHz:            3709.0000
CPU min MHz:            400.0000
BogoMIPS:               4799.75
Memory:                 768 GB ram

Signed-off-by: Mathieu Desnoyers
Cc: Nicholas Piggin
Cc: Michael Ellerman
Cc: Greg Kroah-Hartman
Cc: Sebastian Andrzej Siewior
Cc: "Paul E. McKenney"
Cc: Will Deacon
Cc: Peter Zijlstra
Cc: Boqun Feng
Cc: Alan Stern
Cc: John Stultz
Cc: Neeraj Upadhyay
Cc: Linus Torvalds
Cc: Andrew Morton
Cc: Boqun Feng
Cc: Frederic Weisbecker
Cc: Joel Fernandes
Cc: Josh Triplett
Cc: Uladzislau Rezki
Cc: Steven Rostedt
Cc: Lai Jiangshan
Cc: Zqiang
Cc: Ingo Molnar
Cc: Waiman Long
Cc: Mark Rutland
Cc: Thomas Gleixner
Cc: Vlastimil Babka
Cc: maged.michael@gmail.com
Cc: Mateusz Guzik
Cc: Jonas Oberhauser
Cc: rcu@vger.kernel.org
Cc: linux-mm@kvack.org
Cc: lkmm@lists.linux.dev
---
 Documentation/mm/active_mm.rst       |  9 ++--
 arch/Kconfig                         | 32 -------------
 arch/powerpc/Kconfig                 |  1 -
 arch/powerpc/mm/book3s64/radix_tlb.c | 23 +--------
 include/linux/mm_types.h             |  3 --
 include/linux/sched/mm.h             | 71 +++++++++++-----------------
 kernel/exit.c                        |  4 +-
 kernel/fork.c                        | 47 +++++-------------
 kernel/sched/sched.h                 |  8 +---
 lib/Kconfig.debug                    | 10 ----
 10 files changed, 49 insertions(+), 159 deletions(-)

diff --git a/Documentation/mm/active_mm.rst b/Documentation/mm/active_mm.rst
index d096fc091e23..c225cac49c30 100644
--- a/Documentation/mm/active_mm.rst
+++ b/Documentation/mm/active_mm.rst
@@ -2,11 +2,10 @@
 Active MM
 =========
 
-Note, the mm_count refcount may no longer include the "lazy" users
-(running tasks with ->active_mm == mm && ->mm == NULL) on kernels
-with CONFIG_MMU_LAZY_TLB_REFCOUNT=n. Taking and releasing these lazy
-references must be done with mmgrab_lazy_tlb() and mmdrop_lazy_tlb()
-helpers, which abstract this config option.
+Note, the mm_count refcount no longer includes the "lazy" users (running
+tasks with ->active_mm == mm && ->mm == NULL). Taking and releasing these
+lazy references must be done with mmgrab_lazy_tlb() and mmdrop_lazy_tlb()
+helpers, which are implemented with hazard pointers.
 
 ::
 
diff --git a/arch/Kconfig b/arch/Kconfig
index 975dd22a2dbd..d4261935f8dc 100644
--- a/arch/Kconfig
+++ b/arch/Kconfig
@@ -475,38 +475,6 @@ config ARCH_WANT_IRQS_OFF_ACTIVATE_MM
 	  irqs disabled over activate_mm. Architectures that do IPI based TLB
 	  shootdowns should enable this.
 
-# Use normal mm refcounting for MMU_LAZY_TLB kernel thread references.
-# MMU_LAZY_TLB_REFCOUNT=n can improve the scalability of context switching
-# to/from kernel threads when the same mm is running on a lot of CPUs (a large
-# multi-threaded application), by reducing contention on the mm refcount.
-#
-# This can be disabled if the architecture ensures no CPUs are using an mm as a
-# "lazy tlb" beyond its final refcount (i.e., by the time __mmdrop frees the mm
-# or its kernel page tables). This could be arranged by arch_exit_mmap(), or
-# final exit(2) TLB flush, for example.
-#
-# To implement this, an arch *must*:
-# Ensure the _lazy_tlb variants of mmgrab/mmdrop are used when manipulating
-# the lazy tlb reference of a kthread's ->active_mm (non-arch code has been
-# converted already).
-config MMU_LAZY_TLB_REFCOUNT
-	def_bool y
-	depends on !MMU_LAZY_TLB_SHOOTDOWN
-
-# This option allows MMU_LAZY_TLB_REFCOUNT=n. It ensures no CPUs are using an
-# mm as a lazy tlb beyond its last reference count, by shooting down these
-# users before the mm is deallocated. __mmdrop() first IPIs all CPUs that may
-# be using the mm as a lazy tlb, so that they may switch themselves to using
-# init_mm for their active mm. mm_cpumask(mm) is used to determine which CPUs
-# may be using mm as a lazy tlb mm.
-#
-# To implement this, an arch *must*:
-# - At the time of the final mmdrop of the mm, ensure mm_cpumask(mm) contains
-#   at least all possible CPUs in which the mm is lazy.
-# - It must meet the requirements for MMU_LAZY_TLB_REFCOUNT=n (see above).
-config MMU_LAZY_TLB_SHOOTDOWN
-	bool
-
 config ARCH_HAVE_NMI_SAFE_CMPXCHG
 	bool
 
diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index d7b09b064a8a..b1e25e75baab 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -291,7 +291,6 @@ config PPC
 	select MMU_GATHER_PAGE_SIZE
 	select MMU_GATHER_RCU_TABLE_FREE
 	select MMU_GATHER_MERGE_VMAS
-	select MMU_LAZY_TLB_SHOOTDOWN if PPC_BOOK3S_64
 	select MODULES_USE_ELF_RELA
 	select NEED_DMA_MAP_STATE if PPC64 || NOT_COHERENT_CACHE
 	select NEED_PER_CPU_EMBED_FIRST_CHUNK if PPC64
diff --git a/arch/powerpc/mm/book3s64/radix_tlb.c b/arch/powerpc/mm/book3s64/radix_tlb.c
index 9e1f6558d026..ff0d4f28cf52 100644
--- a/arch/powerpc/mm/book3s64/radix_tlb.c
+++ b/arch/powerpc/mm/book3s64/radix_tlb.c
@@ -1197,28 +1197,7 @@ void radix__tlb_flush(struct mmu_gather *tlb)
 	 * See the comment for radix in arch_exit_mmap().
 	 */
 	if (tlb->fullmm) {
-		if (IS_ENABLED(CONFIG_MMU_LAZY_TLB_SHOOTDOWN)) {
-			/*
-			 * Shootdown based lazy tlb mm refcounting means we
-			 * have to IPI everyone in the mm_cpumask anyway soon
-			 * when the mm goes away, so might as well do it as
-			 * part of the final flush now.
-			 *
-			 * If lazy shootdown was improved to reduce IPIs (e.g.,
-			 * by batching), then it may end up being better to use
-			 * tlbies here instead.
-			 */
-			preempt_disable();
-
-			smp_mb(); /* see radix__flush_tlb_mm */
-			exit_flush_lazy_tlbs(mm);
-			__flush_all_mm(mm, true);
-
-			preempt_enable();
-		} else {
-			__flush_all_mm(mm, true);
-		}
-
+		__flush_all_mm(mm, true);
 	} else if ( (psize = radix_get_mmu_psize(page_size)) == -1) {
 		if (!tlb->freed_tables)
 			radix__flush_tlb_mm(mm);
diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 485424979254..db5f13554485 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -975,9 +975,6 @@ struct mm_struct {
 		atomic_t tlb_flush_batched;
 #endif
 		struct uprobes_state uprobes_state;
-#ifdef CONFIG_PREEMPT_RT
-		struct rcu_head delayed_drop;
-#endif
 #ifdef CONFIG_HUGETLB_PAGE
 		atomic_long_t hugetlb_usage;
 #endif
diff --git a/include/linux/sched/mm.h b/include/linux/sched/mm.h
index 91546493c43d..0fecd1a3311d 100644
--- a/include/linux/sched/mm.h
+++ b/include/linux/sched/mm.h
@@ -9,6 +9,10 @@
 #include
 #include
 #include
+#include
+
+/* Sched lazy mm hazard pointer domain. */
+DECLARE_PER_CPU(struct hp_slot, hp_domain_sched_lazy_mm);
 
 /*
  * Routines for handling mm_structs
@@ -55,61 +59,42 @@ static inline void mmdrop(struct mm_struct *mm)
 		__mmdrop(mm);
 }
 
-#ifdef CONFIG_PREEMPT_RT
-/*
- * RCU callback for delayed mm drop. Not strictly RCU, but call_rcu() is
- * by far the least expensive way to do that.
- */
-static inline void __mmdrop_delayed(struct rcu_head *rhp)
+/* Helpers for lazy TLB mm refcounting */
+static inline void mmgrab_lazy_tlb(struct mm_struct *mm)
 {
-	struct mm_struct *mm = container_of(rhp, struct mm_struct, delayed_drop);
+	struct hp_ctx ctx;
 
-	__mmdrop(mm);
-}
+	/*
+	 * mmgrab_lazy_tlb must provide a full memory barrier, see the
+	 * membarrier comment in finish_task_switch() which relies on this.
+	 */
+	smp_mb();
 
-/*
- * Invoked from finish_task_switch(). Delegates the heavy lifting on RT
- * kernels via RCU.
- */
-static inline void mmdrop_sched(struct mm_struct *mm)
-{
-	/* Provides a full memory barrier. See mmdrop() */
-	if (atomic_dec_and_test(&mm->mm_count))
-		call_rcu(&mm->delayed_drop, __mmdrop_delayed);
-}
-#else
-static inline void mmdrop_sched(struct mm_struct *mm)
-{
-	mmdrop(mm);
-}
-#endif
+	/*
+	 * The caller guarantees existence of mm. Chain this existence
+	 * guarantee to a hazard pointer.
+	 */
+	ctx = hp_allocate(&hp_domain_sched_lazy_mm, mm);
 
-/* Helpers for lazy TLB mm refcounting */
-static inline void mmgrab_lazy_tlb(struct mm_struct *mm)
-{
-	if (IS_ENABLED(CONFIG_MMU_LAZY_TLB_REFCOUNT))
-		mmgrab(mm);
+	/* There is only a single lazy mm per CPU at any time. */
+	WARN_ON_ONCE(!hp_ctx_addr(ctx));
 }
 
 static inline void mmdrop_lazy_tlb(struct mm_struct *mm)
 {
-	if (IS_ENABLED(CONFIG_MMU_LAZY_TLB_REFCOUNT)) {
-		mmdrop(mm);
-	} else {
-		/*
-		 * mmdrop_lazy_tlb must provide a full memory barrier, see the
-		 * membarrier comment finish_task_switch which relies on this.
-		 */
-		smp_mb();
-	}
+	/*
+	 * mmdrop_lazy_tlb must provide a full memory barrier, see the
+	 * membarrier comment in finish_task_switch() which relies on this.
+	 */
+	smp_mb();
+	WRITE_ONCE(this_cpu_ptr(&hp_domain_sched_lazy_mm)->addr, NULL);
 }
 
 static inline void mmdrop_lazy_tlb_sched(struct mm_struct *mm)
 {
-	if (IS_ENABLED(CONFIG_MMU_LAZY_TLB_REFCOUNT))
-		mmdrop_sched(mm);
-	else
-		smp_mb(); /* see mmdrop_lazy_tlb() above */
+	smp_mb(); /* see mmdrop_lazy_tlb() above */
+	WRITE_ONCE(this_cpu_ptr(&hp_domain_sched_lazy_mm)->addr, NULL);
 }
 
 /**
diff --git a/kernel/exit.c b/kernel/exit.c
index 7430852a8571..cb4ace06c0f0 100644
--- a/kernel/exit.c
+++ b/kernel/exit.c
@@ -545,8 +545,6 @@ static void exit_mm(void)
 	if (!mm)
 		return;
 	mmap_read_lock(mm);
-	mmgrab_lazy_tlb(mm);
-	BUG_ON(mm != current->active_mm);
 	/* more a memory barrier than a real lock */
 	task_lock(current);
 	/*
@@ -561,6 +559,8 @@ static void exit_mm(void)
 	 */
 	smp_mb__after_spinlock();
 	local_irq_disable();
+	mmgrab_lazy_tlb(mm);
+	BUG_ON(mm != current->active_mm);
 	current->mm = NULL;
 	membarrier_update_current_mm(NULL);
 	enter_lazy_tlb(mm, current);
diff --git a/kernel/fork.c b/kernel/fork.c
index cc760491f201..42c652ec39b5 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -149,6 +149,9 @@ DEFINE_PER_CPU(unsigned long, process_counts) = 0;
 
 __cacheline_aligned DEFINE_RWLOCK(tasklist_lock);  /* outer */
 
+/* Sched lazy mm hazard pointer domain. */
+DEFINE_PER_CPU(struct hp_slot, hp_domain_sched_lazy_mm);
+
 #ifdef CONFIG_PROVE_RCU
 int lockdep_tasklist_lock_is_held(void)
 {
@@ -855,50 +858,24 @@ static void do_shoot_lazy_tlb(void *arg)
 		WARN_ON_ONCE(current->mm);
 		current->active_mm = &init_mm;
 		switch_mm(mm, &init_mm, current);
+		WRITE_ONCE(this_cpu_ptr(&hp_domain_sched_lazy_mm)->addr, NULL);
 	}
 }
 
-static void cleanup_lazy_tlbs(struct mm_struct *mm)
+static void retire_lazy_mm_hp(int cpu, struct hp_slot *slot, void *addr)
 {
-	if (!IS_ENABLED(CONFIG_MMU_LAZY_TLB_SHOOTDOWN)) {
-		/*
-		 * In this case, lazy tlb mms are refcounted and would not reach
-		 * __mmdrop until all CPUs have switched away and mmdrop()ed.
-		 */
-		return;
-	}
+	smp_call_function_single(cpu, do_shoot_lazy_tlb, addr, 1);
+	smp_call_function_single(cpu, do_check_lazy_tlb, addr, 1);
+}
 
+static void cleanup_lazy_tlbs(struct mm_struct *mm)
+{
 	/*
-	 * Lazy mm shootdown does not refcount "lazy tlb mm" usage, rather it
-	 * requires lazy mm users to switch to another mm when the refcount
+	 * Require lazy mm users to switch to another mm when the refcount
 	 * drops to zero, before the mm is freed. This requires IPIs here to
 	 * switch kernel threads to init_mm.
-	 *
-	 * archs that use IPIs to flush TLBs can piggy-back that lazy tlb mm
-	 * switch with the final userspace teardown TLB flush which leaves the
-	 * mm lazy on this CPU but no others, reducing the need for additional
-	 * IPIs here. There are cases where a final IPI is still required here,
-	 * such as the final mmdrop being performed on a different CPU than the
-	 * one exiting, or kernel threads using the mm when userspace exits.
-	 *
-	 * IPI overheads have not found to be expensive, but they could be
-	 * reduced in a number of possible ways, for example (roughly
-	 * increasing order of complexity):
-	 * - The last lazy reference created by exit_mm() could instead switch
-	 *   to init_mm, however it's probable this will run on the same CPU
-	 *   immediately afterwards, so this may not reduce IPIs much.
-	 * - A batch of mms requiring IPIs could be gathered and freed at once.
-	 * - CPUs store active_mm where it can be remotely checked without a
-	 *   lock, to filter out false-positives in the cpumask.
-	 * - After mm_users or mm_count reaches zero, switching away from the
-	 *   mm could clear mm_cpumask to reduce some IPIs, perhaps together
-	 *   with some batching or delaying of the final IPIs.
-	 * - A delayed freeing and RCU-like quiescing sequence based on mm
-	 *   switching to avoid IPIs completely.
 	 */
-	on_each_cpu_mask(mm_cpumask(mm), do_shoot_lazy_tlb, (void *)mm, 1);
-	if (IS_ENABLED(CONFIG_DEBUG_VM_SHOOT_LAZIES))
-		on_each_cpu(do_check_lazy_tlb, (void *)mm, 1);
+	hp_scan(&hp_domain_sched_lazy_mm, mm, retire_lazy_mm_hp);
 }
 
 /*
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 4c36cc680361..d883c2aa3518 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -3527,12 +3527,8 @@ static inline void switch_mm_cid(struct rq *rq,
 	if (!next->mm) {                                // to kernel
 		/*
 		 * user -> kernel transition does not guarantee a barrier, but
-		 * we can use the fact that it performs an atomic operation in
-		 * mmgrab().
-		 */
-		if (prev->mm)                           // from user
-			smp_mb__after_mmgrab();
-		/*
+		 * we can use the fact that mmgrab() has a full barrier.
+		 *
 		 * kernel -> kernel transition does not change rq->curr->mm
 		 * state. It stays NULL.
 		 */
diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
index a30c03a66172..1cb9dab361c9 100644
--- a/lib/Kconfig.debug
+++ b/lib/Kconfig.debug
@@ -803,16 +803,6 @@ config DEBUG_VM
 
 	  If unsure, say N.
 
-config DEBUG_VM_SHOOT_LAZIES
-	bool "Debug MMU_LAZY_TLB_SHOOTDOWN implementation"
-	depends on DEBUG_VM
-	depends on MMU_LAZY_TLB_SHOOTDOWN
-	help
-	  Enable additional IPIs that ensure lazy tlb mm references are removed
-	  before the mm is freed.
-
-	  If unsure, say N.
-
 config DEBUG_VM_MAPLE_TREE
 	bool "Debug VM maple trees"
 	depends on DEBUG_VM
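
For readers who do not have the earlier patches of this series at hand, the
stand-alone C sketch below illustrates the per-CPU-slot hazard pointer scheme
the changelog describes: a grab publishes the lazy mm in the current CPU's
slot, a drop clears the slot, and the final teardown scans all slots before
the mm may be freed. It is a user-space analogue only: struct hp_slot,
hp_allocate(), hp_ctx_addr() and hp_scan() come from the hazard pointer patch
in this series and are merely approximated here, and the names lazy_mm_grab(),
lazy_mm_drop(), lazy_mm_scan() and the fixed NR_CPUS array are invented for
the example, not kernel API.

/*
 * Illustrative user-space analogue of the per-CPU lazy mm hazard pointer
 * domain. In the kernel, the scan retires a matching slot with an IPI that
 * switches the remote CPU to init_mm; here we only report matching slots.
 */
#include <assert.h>
#include <stdatomic.h>
#include <stddef.h>
#include <stdio.h>

#define NR_CPUS 4

struct hp_slot {
	_Atomic(void *) addr;		/* at most one lazy mm per CPU */
};

static struct hp_slot lazy_mm_slot[NR_CPUS];

/* Publish mm as this CPU's lazy mm (mmgrab_lazy_tlb() analogue). */
static void lazy_mm_grab(int cpu, void *mm)
{
	/* Only a single lazy mm may exist per CPU at any time. */
	assert(atomic_load(&lazy_mm_slot[cpu].addr) == NULL);
	atomic_store(&lazy_mm_slot[cpu].addr, mm);
	atomic_thread_fence(memory_order_seq_cst);	/* full barrier, as in the patch */
}

/* Clear this CPU's lazy mm slot (mmdrop_lazy_tlb() analogue). */
static void lazy_mm_drop(int cpu)
{
	atomic_thread_fence(memory_order_seq_cst);
	atomic_store(&lazy_mm_slot[cpu].addr, NULL);
}

/* Scan all slots for mm before freeing it (cleanup_lazy_tlbs() analogue). */
static void lazy_mm_scan(void *mm)
{
	for (int cpu = 0; cpu < NR_CPUS; cpu++)
		if (atomic_load(&lazy_mm_slot[cpu].addr) == mm)
			printf("CPU %d still uses this mm lazily\n", cpu);
}

int main(void)
{
	int fake_mm;

	lazy_mm_grab(1, &fake_mm);
	lazy_mm_scan(&fake_mm);		/* reports CPU 1 */
	lazy_mm_drop(1);
	lazy_mm_scan(&fake_mm);		/* reports nothing: safe to free */
	return 0;
}

Because each CPU publishes at most one lazy mm, the scan only has to visit a
fixed number of slots, which is what lets the mmgrab/mmdrop fast paths avoid
atomic read-modify-write operations on mm_count.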