From patchwork Thu Jul 20 16:30:56 2023
X-Patchwork-Submitter: Valentin Schneider
X-Patchwork-Id: 13320804
From: Valentin Schneider <vschneid@redhat.com>
To: linux-kernel@vger.kernel.org, linux-trace-kernel@vger.kernel.org,
	linux-doc@vger.kernel.org, kvm@vger.kernel.org, linux-mm@kvack.org,
	bpf@vger.kernel.org, x86@kernel.org, rcu@vger.kernel.org,
	linux-kselftest@vger.kernel.org
Cc: Steven Rostedt, Masami Hiramatsu, Jonathan Corbet, Thomas Gleixner,
	Ingo Molnar, Borislav Petkov, Dave Hansen, "H. Peter Anvin",
	Paolo Bonzini, Wanpeng Li, Vitaly Kuznetsov, Andy Lutomirski,
	Peter Zijlstra, Frederic Weisbecker, "Paul E. McKenney",
	Neeraj Upadhyay, Joel Fernandes, Josh Triplett, Boqun Feng,
	Mathieu Desnoyers, Lai Jiangshan, Zqiang, Andrew Morton,
	Uladzislau Rezki, Christoph Hellwig, Lorenzo Stoakes,
	Josh Poimboeuf, Jason Baron, Kees Cook, Sami Tolvanen,
	Ard Biesheuvel, Nicholas Piggin, Juerg Haefliger,
	Nicolas Saenz Julienne, "Kirill A. Shutemov", Nadav Amit,
	Dan Carpenter, Chuang Wang, Yang Jihong, Petr Mladek,
	"Jason A. Donenfeld", Song Liu, Julian Pidancet, Tom Lendacky,
	Dionna Glaze, Thomas Weißschuh, Juri Lelli,
	Daniel Bristot de Oliveira, Marcelo Tosatti, Yair Podemsky
Subject: [RFC PATCH v2 20/20] x86/mm, mm/vmalloc: Defer flush_tlb_kernel_range() targeting NOHZ_FULL CPUs
Date: Thu, 20 Jul 2023 17:30:56 +0100
Message-Id: <20230720163056.2564824-21-vschneid@redhat.com>
In-Reply-To: <20230720163056.2564824-1-vschneid@redhat.com>
References: <20230720163056.2564824-1-vschneid@redhat.com>
MIME-Version: 1.0
vunmap()s issued from housekeeping CPUs are a relatively common source of
interference for isolated NOHZ_FULL CPUs, as they are hit by the
flush_tlb_kernel_range() IPIs.

Given that CPUs executing in userspace do not access data in the vmalloc
range, these IPIs could be deferred until their next kernel entry.

This does require a guarantee that nothing in the vmalloc range can be
accessed in early entry code. vmalloc'd kernel stacks (VMAP_STACK) are
AFAICT a safe exception, as a task running in userspace needs to enter
kernelspace to execute do_exit() before its stack can be vfree'd.

XXX: Validation that nothing in the vmalloc range is accessed in .noinstr
or somesuch?

Blindly deferring any and all flushes of the kernel mappings is a risky
move, so introduce a variant of flush_tlb_kernel_range() that explicitly
allows deferral. Use it for vunmap flushes.

Note that while flush_tlb_kernel_range() may end up issuing a full flush
(including user mappings), this only happens when reaching an invalidation
range threshold where it is cheaper to do a full flush than to individually
invalidate each page in the range via INVLPG. IOW, it doesn't *require*
invalidating user mappings, and thus remains safe to defer until a later
kernel entry.

Signed-off-by: Valentin Schneider <vschneid@redhat.com>
---
A self-contained userspace sketch of the deferral decision is appended
after the diff for illustration.

 arch/x86/include/asm/tlbflush.h |  1 +
 arch/x86/mm/tlb.c               | 23 ++++++++++++++++++++---
 mm/vmalloc.c                    | 19 ++++++++++++++-----
 3 files changed, 35 insertions(+), 8 deletions(-)

diff --git a/arch/x86/include/asm/tlbflush.h b/arch/x86/include/asm/tlbflush.h
index 323b971987af7..0b9b1f040c476 100644
--- a/arch/x86/include/asm/tlbflush.h
+++ b/arch/x86/include/asm/tlbflush.h
@@ -248,6 +248,7 @@ extern void flush_tlb_mm_range(struct mm_struct *mm, unsigned long start,
 				unsigned long end, unsigned int stride_shift,
 				bool freed_tables);
 extern void flush_tlb_kernel_range(unsigned long start, unsigned long end);
+extern void flush_tlb_kernel_range_deferrable(unsigned long start, unsigned long end);
 
 static inline void flush_tlb_page(struct vm_area_struct *vma, unsigned long a)
 {
diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
index 631df9189ded4..bb18b35e61b4a 100644
--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -10,6 +10,7 @@
 #include 
 #include 
 #include 
+#include 
 
 #include 
 #include 
@@ -1045,6 +1046,11 @@ static void do_flush_tlb_all(void *info)
 	__flush_tlb_all();
 }
 
+static bool do_kernel_flush_defer_cond(int cpu, void *info)
+{
+	return !ct_set_cpu_work(cpu, CONTEXT_WORK_TLBI);
+}
+
 void flush_tlb_all(void)
 {
 	count_vm_tlb_event(NR_TLB_REMOTE_FLUSH);
@@ -1061,12 +1067,13 @@ static void do_kernel_range_flush(void *info)
 		flush_tlb_one_kernel(addr);
 }
 
-void flush_tlb_kernel_range(unsigned long start, unsigned long end)
+static inline void
+__flush_tlb_kernel_range(smp_cond_func_t cond_func, unsigned long start, unsigned long end)
 {
 	/* Balance as user space task's flush, a bit conservative */
 	if (end == TLB_FLUSH_ALL ||
 	    (end - start) > tlb_single_page_flush_ceiling << PAGE_SHIFT) {
-		on_each_cpu(do_flush_tlb_all, NULL, 1);
+		on_each_cpu_cond(cond_func, do_flush_tlb_all, NULL, 1);
 	} else {
 		struct flush_tlb_info *info;
@@ -1074,13 +1081,23 @@ void flush_tlb_kernel_range(unsigned long start, unsigned long end)
 		info = get_flush_tlb_info(NULL, start, end, 0, false,
 					  TLB_GENERATION_INVALID);
 
-		on_each_cpu(do_kernel_range_flush, info, 1);
+		on_each_cpu_cond(cond_func, do_kernel_range_flush, info, 1);
 
 		put_flush_tlb_info();
 		preempt_enable();
 	}
 }
 
+void flush_tlb_kernel_range(unsigned long start, unsigned long end)
+{
+	__flush_tlb_kernel_range(NULL, start, end);
+}
+
+void flush_tlb_kernel_range_deferrable(unsigned long start, unsigned long end)
+{
+	__flush_tlb_kernel_range(do_kernel_flush_defer_cond, start, end);
+}
+
 /*
  * This can be used from process context to figure out what the value of
  * CR3 is without needing to do a (slow) __read_cr3().
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 93cf99aba335b..e08b6c7d22fb6 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -439,6 +439,15 @@ void vunmap_range_noflush(unsigned long start, unsigned long end)
 	__vunmap_range_noflush(start, end);
 }
 
+#ifdef CONFIG_CONTEXT_TRACKING_WORK
+void __weak flush_tlb_kernel_range_deferrable(unsigned long start, unsigned long end)
+{
+	flush_tlb_kernel_range(start, end);
+}
+#else
+#define flush_tlb_kernel_range_deferrable(start, end) flush_tlb_kernel_range(start, end)
+#endif
+
 /**
  * vunmap_range - unmap kernel virtual addresses
  * @addr: start of the VM area to unmap
@@ -452,7 +461,7 @@ void vunmap_range(unsigned long addr, unsigned long end)
 {
 	flush_cache_vunmap(addr, end);
 	vunmap_range_noflush(addr, end);
-	flush_tlb_kernel_range(addr, end);
+	flush_tlb_kernel_range_deferrable(addr, end);
 }
 
 static int vmap_pages_pte_range(pmd_t *pmd, unsigned long addr,
@@ -1746,7 +1755,7 @@ static bool __purge_vmap_area_lazy(unsigned long start, unsigned long end)
 		list_last_entry(&local_purge_list,
 			struct vmap_area, list)->va_end);
 
-	flush_tlb_kernel_range(start, end);
+	flush_tlb_kernel_range_deferrable(start, end);
 	resched_threshold = lazy_max_pages() << 1;
 
 	spin_lock(&free_vmap_area_lock);
@@ -1849,7 +1858,7 @@ static void free_unmap_vmap_area(struct vmap_area *va)
 	flush_cache_vunmap(va->va_start, va->va_end);
 	vunmap_range_noflush(va->va_start, va->va_end);
 	if (debug_pagealloc_enabled_static())
-		flush_tlb_kernel_range(va->va_start, va->va_end);
+		flush_tlb_kernel_range_deferrable(va->va_start, va->va_end);
 
 	free_vmap_area_noflush(va);
 }
@@ -2239,7 +2248,7 @@ static void vb_free(unsigned long addr, unsigned long size)
 	vunmap_range_noflush(addr, addr + size);
 
 	if (debug_pagealloc_enabled_static())
-		flush_tlb_kernel_range(addr, addr + size);
+		flush_tlb_kernel_range_deferrable(addr, addr + size);
 
 	spin_lock(&vb->lock);
 
@@ -2304,7 +2313,7 @@ static void _vm_unmap_aliases(unsigned long start, unsigned long end, int flush)
 	free_purged_blocks(&purge_list);
 
 	if (!__purge_vmap_area_lazy(start, end) && flush)
-		flush_tlb_kernel_range(start, end);
+		flush_tlb_kernel_range_deferrable(start, end);
 	mutex_unlock(&vmap_purge_lock);
 }
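
For illustration only, below is a minimal userspace sketch of the decision
made by do_kernel_flush_defer_cond() above: a CPU currently modelled as
running in userspace has the pending TLB flush recorded as deferred work
instead of being sent an IPI, and drains that work on its next "kernel
entry". The per-CPU flag word and the defer_work()/kernel_entry() helpers
are illustrative stand-ins for the series' ct_set_cpu_work()/context-tracking
machinery, not the kernel implementation.

/* Userspace sketch; build with any C11 compiler, e.g. gcc -std=c11 sketch.c */
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

#define NR_CPUS     4
#define WORK_TLBI   (1u << 0)   /* stand-in for CONTEXT_WORK_TLBI          */
#define STATE_USER  (1u << 31)  /* CPU is currently executing in userspace */

/* Per-CPU word holding the context flag and any deferred work bits. */
static _Atomic unsigned int cpu_state[NR_CPUS];

/* Try to defer @work on @cpu; only succeeds while the CPU sits in userspace. */
static bool defer_work(int cpu, unsigned int work)
{
	unsigned int old = atomic_load(&cpu_state[cpu]);

	do {
		if (!(old & STATE_USER))
			return false;   /* in the kernel: must send an IPI */
	} while (!atomic_compare_exchange_weak(&cpu_state[cpu], &old, old | work));

	return true;
}

/* Mirrors the smp_cond_func_t shape: IPI only the CPUs we could not defer. */
static bool need_ipi(int cpu, void *info)
{
	return !defer_work(cpu, WORK_TLBI);
}

/* On kernel entry the CPU leaves "user" state and runs any deferred flush. */
static void kernel_entry(int cpu)
{
	unsigned int old = atomic_fetch_and(&cpu_state[cpu], ~(STATE_USER | WORK_TLBI));

	if (old & WORK_TLBI)
		printf("cpu%d: running deferred TLB flush on kernel entry\n", cpu);
}

int main(void)
{
	/* Pretend CPU1 is an isolated NOHZ_FULL CPU executing in userspace. */
	atomic_store(&cpu_state[1], STATE_USER);

	for (int cpu = 0; cpu < NR_CPUS; cpu++)
		printf("cpu%d: %s\n", cpu, need_ipi(cpu, NULL) ? "send IPI" : "deferred");

	kernel_entry(1);	/* CPU1 eventually re-enters the kernel */
	return 0;
}

Running this prints "send IPI" for the CPUs modelled as being in the kernel
and "deferred" for the userspace one, which then performs the flush when
kernel_entry() runs; the kernel patch makes the same choice per CPU via
on_each_cpu_cond().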