From patchwork Tue Feb 11 21:07:56 2025
X-Patchwork-Submitter: Rik van Riel
X-Patchwork-Id: 13970698
From: Rik van Riel
To: x86@kernel.org
Cc: linux-kernel@vger.kernel.org, bp@alien8.de, peterz@infradead.org,
    dave.hansen@linux.intel.com, zhengqi.arch@bytedance.com,
    nadav.amit@gmail.com, thomas.lendacky@amd.com, kernel-team@meta.com,
    linux-mm@kvack.org, akpm@linux-foundation.org, jackmanb@google.com,
    jannh@google.com, mhklinux@outlook.com, andrew.cooper3@citrix.com,
    Rik van Riel, Manali Shukla
Subject: [PATCH v10 01/12] x86/mm: make MMU_GATHER_RCU_TABLE_FREE unconditional
Date: Tue, 11 Feb 2025 16:07:56 -0500
Message-ID: <20250211210823.242681-2-riel@surriel.com>
In-Reply-To: <20250211210823.242681-1-riel@surriel.com>
References: <20250211210823.242681-1-riel@surriel.com>
Currently x86 uses CONFIG_MMU_GATHER_RCU_TABLE_FREE when using
paravirt, and not when running on bare metal.
There is no real good reason to do things differently for each
setup. Make them all the same.

Currently get_user_pages_fast() synchronizes against page table
freeing in two different ways:
- on bare metal, by blocking IRQs, which block TLB flush IPIs
- on paravirt, with MMU_GATHER_RCU_TABLE_FREE

This is done because some paravirt TLB flush implementations
handle the TLB flush in the hypervisor, and will do the flush
even when the target CPU has interrupts disabled.

Always handle page table freeing with MMU_GATHER_RCU_TABLE_FREE.
Using RCU synchronization between page table freeing and
get_user_pages_fast() allows bare metal to also do TLB flushing
while interrupts are disabled.

Various places in the mm do still block IRQs or disable
preemption as an implicit way to block RCU frees.

That makes it safe to use INVLPGB on AMD CPUs.

Signed-off-by: Rik van Riel
Suggested-by: Peter Zijlstra
Tested-by: Manali Shukla
Tested-by: Brendan Jackman
---
 arch/x86/Kconfig           |  2 +-
 arch/x86/kernel/paravirt.c |  7 +------
 arch/x86/mm/pgtable.c      | 16 ++++------------
 mm/mmu_gather.c            |  4 ++--
 4 files changed, 8 insertions(+), 21 deletions(-)

diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 9d7bd0ae48c4..e8743f8c9fd0 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -274,7 +274,7 @@ config X86
 	select HAVE_PCI
 	select HAVE_PERF_REGS
 	select HAVE_PERF_USER_STACK_DUMP
-	select MMU_GATHER_RCU_TABLE_FREE	if PARAVIRT
+	select MMU_GATHER_RCU_TABLE_FREE
 	select MMU_GATHER_MERGE_VMAS
 	select HAVE_POSIX_CPU_TIMERS_TASK_WORK
 	select HAVE_REGS_AND_STACK_ACCESS_API
diff --git a/arch/x86/kernel/paravirt.c b/arch/x86/kernel/paravirt.c
index fec381533555..2b78a6b466ed 100644
--- a/arch/x86/kernel/paravirt.c
+++ b/arch/x86/kernel/paravirt.c
@@ -59,11 +59,6 @@ void __init native_pv_lock_init(void)
 		static_branch_enable(&virt_spin_lock_key);
 }
 
-static void native_tlb_remove_table(struct mmu_gather *tlb, void *table)
-{
-	tlb_remove_page(tlb, table);
-}
-
 struct static_key paravirt_steal_enabled;
 struct static_key paravirt_steal_rq_enabled;
 
@@ -191,7 +186,7 @@ struct paravirt_patch_template pv_ops = {
 	.mmu.flush_tlb_kernel	= native_flush_tlb_global,
 	.mmu.flush_tlb_one_user	= native_flush_tlb_one_user,
 	.mmu.flush_tlb_multi	= native_flush_tlb_multi,
-	.mmu.tlb_remove_table	= native_tlb_remove_table,
+	.mmu.tlb_remove_table	= tlb_remove_table,
 
 	.mmu.exit_mmap		= paravirt_nop,
 	.mmu.notify_page_enc_status_changed	= paravirt_nop,
diff --git a/arch/x86/mm/pgtable.c b/arch/x86/mm/pgtable.c
index 5745a354a241..3dc4af1f7868 100644
--- a/arch/x86/mm/pgtable.c
+++ b/arch/x86/mm/pgtable.c
@@ -18,14 +18,6 @@ EXPORT_SYMBOL(physical_mask);
 #define PGTABLE_HIGHMEM 0
 #endif
 
-#ifndef CONFIG_PARAVIRT
-static inline
-void paravirt_tlb_remove_table(struct mmu_gather *tlb, void *table)
-{
-	tlb_remove_page(tlb, table);
-}
-#endif
-
 gfp_t __userpte_alloc_gfp = GFP_PGTABLE_USER | PGTABLE_HIGHMEM;
 
 pgtable_t pte_alloc_one(struct mm_struct *mm)
@@ -54,7 +46,7 @@ void ___pte_free_tlb(struct mmu_gather *tlb, struct page *pte)
 {
 	pagetable_pte_dtor(page_ptdesc(pte));
 	paravirt_release_pte(page_to_pfn(pte));
-	paravirt_tlb_remove_table(tlb, pte);
+	tlb_remove_table(tlb, pte);
 }
 
 #if CONFIG_PGTABLE_LEVELS > 2
@@ -70,7 +62,7 @@ void ___pmd_free_tlb(struct mmu_gather *tlb, pmd_t *pmd)
 	tlb->need_flush_all = 1;
 #endif
 	pagetable_pmd_dtor(ptdesc);
-	paravirt_tlb_remove_table(tlb, ptdesc_page(ptdesc));
+	tlb_remove_table(tlb, ptdesc_page(ptdesc));
 }
 
 #if CONFIG_PGTABLE_LEVELS > 3
@@ -80,14 +72,14 @@ void ___pud_free_tlb(struct mmu_gather *tlb, pud_t *pud)
 
 	pagetable_pud_dtor(ptdesc);
 	paravirt_release_pud(__pa(pud) >> PAGE_SHIFT);
-	paravirt_tlb_remove_table(tlb, virt_to_page(pud));
+	tlb_remove_table(tlb, virt_to_page(pud));
 }
 
 #if CONFIG_PGTABLE_LEVELS > 4
 void ___p4d_free_tlb(struct mmu_gather *tlb, p4d_t *p4d)
 {
 	paravirt_release_p4d(__pa(p4d) >> PAGE_SHIFT);
-	paravirt_tlb_remove_table(tlb, virt_to_page(p4d));
+	tlb_remove_table(tlb, virt_to_page(p4d));
 }
 #endif	/* CONFIG_PGTABLE_LEVELS > 4 */
 #endif	/* CONFIG_PGTABLE_LEVELS > 3 */
diff --git a/mm/mmu_gather.c b/mm/mmu_gather.c
index 99b3e9408aa0..59fd0137af63 100644
--- a/mm/mmu_gather.c
+++ b/mm/mmu_gather.c
@@ -246,8 +246,8 @@ static void __tlb_remove_table_free(struct mmu_table_batch *batch)
  * IRQs delays the completion of the TLB flush we can never observe an already
  * freed page.
  *
- * Architectures that do not have this (PPC) need to delay the freeing by some
- * other means, this is that means.
+ * Architectures that do not use IPIs (PPC, x86 paravirt, AMD INVLPGB, ARM64)
+ * need to delay the freeing by some other means, this is that means.
  *
  * What we do is batch the freed directory pages (tables) and RCU free them.
  * We use the sched RCU variant, as that guarantees that IRQ/preempt disabling