From patchwork Mon Jan 20 02:40:09 2025
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Rik van Riel
X-Patchwork-Id: 13944705
From: Rik van Riel
To: x86@kernel.org
Cc: linux-kernel@vger.kernel.org, bp@alien8.de, peterz@infradead.org, dave.hansen@linux.intel.com, zhengqi.arch@bytedance.com, nadav.amit@gmail.com, thomas.lendacky@amd.com, kernel-team@meta.com, linux-mm@kvack.org, akpm@linux-foundation.org, jannh@google.com, mhklinux@outlook.com, andrew.cooper3@citrix.com, Rik van Riel
Subject: [PATCH v6 01/12] x86/mm: make MMU_GATHER_RCU_TABLE_FREE unconditional
Date: Sun, 19 Jan 2025 21:40:09 -0500
Message-ID: <20250120024104.1924753-2-riel@surriel.com>
X-Mailer: git-send-email 2.47.1
In-Reply-To: <20250120024104.1924753-1-riel@surriel.com>
References: <20250120024104.1924753-1-riel@surriel.com>
Currently x86 uses CONFIG_MMU_GATHER_RCU_TABLE_FREE when using paravirt, and not when running on bare metal. There is no good reason to do things differently for each setup. Make them all the same.
Currently get_user_pages_fast synchronizes against page table freeing in two different ways:
- on bare metal, by blocking IRQs, which block TLB flush IPIs
- on paravirt, with MMU_GATHER_RCU_TABLE_FREE

This is done because some paravirt TLB flush implementations handle the TLB flush in the hypervisor, and will do the flush even when the target CPU has interrupts disabled.

Always handle page table freeing with MMU_GATHER_RCU_TABLE_FREE. Using RCU synchronization between page table freeing and get_user_pages_fast() allows bare metal to also do TLB flushing while interrupts are disabled.

That makes it safe to use INVLPGB on AMD CPUs.

Signed-off-by: Rik van Riel
Suggested-by: Peter Zijlstra
---
 arch/x86/Kconfig           | 2 +-
 arch/x86/kernel/paravirt.c | 7 +------
 2 files changed, 2 insertions(+), 7 deletions(-)

diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 9d7bd0ae48c4..e8743f8c9fd0 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -274,7 +274,7 @@ config X86
 	select HAVE_PCI
 	select HAVE_PERF_REGS
 	select HAVE_PERF_USER_STACK_DUMP
-	select MMU_GATHER_RCU_TABLE_FREE	if PARAVIRT
+	select MMU_GATHER_RCU_TABLE_FREE
 	select MMU_GATHER_MERGE_VMAS
 	select HAVE_POSIX_CPU_TIMERS_TASK_WORK
 	select HAVE_REGS_AND_STACK_ACCESS_API
diff --git a/arch/x86/kernel/paravirt.c b/arch/x86/kernel/paravirt.c
index fec381533555..2b78a6b466ed 100644
--- a/arch/x86/kernel/paravirt.c
+++ b/arch/x86/kernel/paravirt.c
@@ -59,11 +59,6 @@ void __init native_pv_lock_init(void)
 		static_branch_enable(&virt_spin_lock_key);
 }
 
-static void native_tlb_remove_table(struct mmu_gather *tlb, void *table)
-{
-	tlb_remove_page(tlb, table);
-}
-
 struct static_key paravirt_steal_enabled;
 struct static_key paravirt_steal_rq_enabled;
 
@@ -191,7 +186,7 @@ struct paravirt_patch_template pv_ops = {
 	.mmu.flush_tlb_kernel	= native_flush_tlb_global,
 	.mmu.flush_tlb_one_user	= native_flush_tlb_one_user,
 	.mmu.flush_tlb_multi	= native_flush_tlb_multi,
-	.mmu.tlb_remove_table	= native_tlb_remove_table,
+	.mmu.tlb_remove_table	= tlb_remove_table,
 
 	.mmu.exit_mmap		= paravirt_nop,
 	.mmu.notify_page_enc_status_changed	= paravirt_nop,

From patchwork Mon Jan 20 02:40:10 2025
X-Patchwork-Submitter: Rik van Riel
X-Patchwork-Id: 13944708
From: Rik van Riel
To: x86@kernel.org
Cc: linux-kernel@vger.kernel.org, bp@alien8.de, peterz@infradead.org, dave.hansen@linux.intel.com, zhengqi.arch@bytedance.com, nadav.amit@gmail.com, thomas.lendacky@amd.com, kernel-team@meta.com, linux-mm@kvack.org, akpm@linux-foundation.org, jannh@google.com, mhklinux@outlook.com, andrew.cooper3@citrix.com, Rik van Riel
Subject: [PATCH v6 02/12] x86/mm: remove pv_ops.mmu.tlb_remove_table call
Date: Sun, 19 Jan 2025 21:40:10 -0500
Message-ID: <20250120024104.1924753-3-riel@surriel.com>
In-Reply-To: <20250120024104.1924753-1-riel@surriel.com>
References: <20250120024104.1924753-1-riel@surriel.com>
Every pv_ops.mmu.tlb_remove_table call ends up calling tlb_remove_table. Get rid of the indirection by simply calling tlb_remove_table directly, and not going through the paravirt function pointers.
Signed-off-by: Rik van Riel
Suggested-by: Qi Zheng
---
 arch/x86/hyperv/mmu.c                 |  1 -
 arch/x86/include/asm/paravirt.h       |  5 -----
 arch/x86/include/asm/paravirt_types.h |  2 --
 arch/x86/kernel/kvm.c                 |  1 -
 arch/x86/kernel/paravirt.c            |  1 -
 arch/x86/mm/pgtable.c                 | 16 ++++------------
 arch/x86/xen/mmu_pv.c                 |  1 -
 7 files changed, 4 insertions(+), 23 deletions(-)

diff --git a/arch/x86/hyperv/mmu.c b/arch/x86/hyperv/mmu.c
index 1cc113200ff5..cbe6c71e17c1 100644
--- a/arch/x86/hyperv/mmu.c
+++ b/arch/x86/hyperv/mmu.c
@@ -240,5 +240,4 @@ void hyperv_setup_mmu_ops(void)
 	pr_info("Using hypercall for remote TLB flush\n");
 	pv_ops.mmu.flush_tlb_multi = hyperv_flush_tlb_multi;
-	pv_ops.mmu.tlb_remove_table = tlb_remove_table;
 }
diff --git a/arch/x86/include/asm/paravirt.h b/arch/x86/include/asm/paravirt.h
index d4eb9e1d61b8..794ba3647c6c 100644
--- a/arch/x86/include/asm/paravirt.h
+++ b/arch/x86/include/asm/paravirt.h
@@ -91,11 +91,6 @@ static inline void __flush_tlb_multi(const struct cpumask *cpumask,
 	PVOP_VCALL2(mmu.flush_tlb_multi, cpumask, info);
 }
 
-static inline void paravirt_tlb_remove_table(struct mmu_gather *tlb, void *table)
-{
-	PVOP_VCALL2(mmu.tlb_remove_table, tlb, table);
-}
-
 static inline void paravirt_arch_exit_mmap(struct mm_struct *mm)
 {
 	PVOP_VCALL1(mmu.exit_mmap, mm);
diff --git a/arch/x86/include/asm/paravirt_types.h b/arch/x86/include/asm/paravirt_types.h
index 8d4fbe1be489..13405959e4db 100644
--- a/arch/x86/include/asm/paravirt_types.h
+++ b/arch/x86/include/asm/paravirt_types.h
@@ -136,8 +136,6 @@ struct pv_mmu_ops {
 	void (*flush_tlb_multi)(const struct cpumask *cpus,
 				const struct flush_tlb_info *info);
 
-	void (*tlb_remove_table)(struct mmu_gather *tlb, void *table);
-
 	/* Hook for intercepting the destruction of an mm_struct. */
 	void (*exit_mmap)(struct mm_struct *mm);
 	void (*notify_page_enc_status_changed)(unsigned long pfn, int npages, bool enc);
diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
index 7a422a6c5983..3be9b3342c67 100644
--- a/arch/x86/kernel/kvm.c
+++ b/arch/x86/kernel/kvm.c
@@ -838,7 +838,6 @@ static void __init kvm_guest_init(void)
 #ifdef CONFIG_SMP
 	if (pv_tlb_flush_supported()) {
 		pv_ops.mmu.flush_tlb_multi = kvm_flush_tlb_multi;
-		pv_ops.mmu.tlb_remove_table = tlb_remove_table;
 		pr_info("KVM setup pv remote TLB flush\n");
 	}
diff --git a/arch/x86/kernel/paravirt.c b/arch/x86/kernel/paravirt.c
index 2b78a6b466ed..c019771e0123 100644
--- a/arch/x86/kernel/paravirt.c
+++ b/arch/x86/kernel/paravirt.c
@@ -186,7 +186,6 @@ struct paravirt_patch_template pv_ops = {
 	.mmu.flush_tlb_kernel	= native_flush_tlb_global,
 	.mmu.flush_tlb_one_user	= native_flush_tlb_one_user,
 	.mmu.flush_tlb_multi	= native_flush_tlb_multi,
-	.mmu.tlb_remove_table	= tlb_remove_table,
 
 	.mmu.exit_mmap		= paravirt_nop,
 	.mmu.notify_page_enc_status_changed	= paravirt_nop,
diff --git a/arch/x86/mm/pgtable.c b/arch/x86/mm/pgtable.c
index 5745a354a241..3dc4af1f7868 100644
--- a/arch/x86/mm/pgtable.c
+++ b/arch/x86/mm/pgtable.c
@@ -18,14 +18,6 @@ EXPORT_SYMBOL(physical_mask);
 #define PGTABLE_HIGHMEM 0
 #endif
 
-#ifndef CONFIG_PARAVIRT
-static inline
-void paravirt_tlb_remove_table(struct mmu_gather *tlb, void *table)
-{
-	tlb_remove_page(tlb, table);
-}
-#endif
-
 gfp_t __userpte_alloc_gfp = GFP_PGTABLE_USER | PGTABLE_HIGHMEM;
 
 pgtable_t pte_alloc_one(struct mm_struct *mm)
@@ -54,7 +46,7 @@ void ___pte_free_tlb(struct mmu_gather *tlb, struct page *pte)
 {
 	pagetable_pte_dtor(page_ptdesc(pte));
 	paravirt_release_pte(page_to_pfn(pte));
-	paravirt_tlb_remove_table(tlb, pte);
+	tlb_remove_table(tlb, pte);
 }
 
 #if CONFIG_PGTABLE_LEVELS > 2
@@ -70,7 +62,7 @@ void ___pmd_free_tlb(struct mmu_gather *tlb, pmd_t *pmd)
 	tlb->need_flush_all = 1;
 #endif
 	pagetable_pmd_dtor(ptdesc);
-	paravirt_tlb_remove_table(tlb, ptdesc_page(ptdesc));
+	tlb_remove_table(tlb, ptdesc_page(ptdesc));
 }
 
 #if CONFIG_PGTABLE_LEVELS > 3
@@ -80,14 +72,14 @@ void ___pud_free_tlb(struct mmu_gather *tlb, pud_t *pud)
 	pagetable_pud_dtor(ptdesc);
 	paravirt_release_pud(__pa(pud) >> PAGE_SHIFT);
-	paravirt_tlb_remove_table(tlb, virt_to_page(pud));
+	tlb_remove_table(tlb, virt_to_page(pud));
 }
 
 #if CONFIG_PGTABLE_LEVELS > 4
 void ___p4d_free_tlb(struct mmu_gather *tlb, p4d_t *p4d)
 {
 	paravirt_release_p4d(__pa(p4d) >> PAGE_SHIFT);
-	paravirt_tlb_remove_table(tlb, virt_to_page(p4d));
+	tlb_remove_table(tlb, virt_to_page(p4d));
 }
 #endif /* CONFIG_PGTABLE_LEVELS > 4 */
 #endif /* CONFIG_PGTABLE_LEVELS > 3 */
diff --git a/arch/x86/xen/mmu_pv.c b/arch/x86/xen/mmu_pv.c
index 55a4996d0c04..041e17282af0 100644
--- a/arch/x86/xen/mmu_pv.c
+++ b/arch/x86/xen/mmu_pv.c
@@ -2137,7 +2137,6 @@ static const typeof(pv_ops) xen_mmu_ops __initconst = {
 	.flush_tlb_kernel = xen_flush_tlb,
 	.flush_tlb_one_user = xen_flush_tlb_one_user,
 	.flush_tlb_multi = xen_flush_tlb_multi,
-	.tlb_remove_table = tlb_remove_table,
 
 	.pgd_alloc = xen_pgd_alloc,
 	.pgd_free = xen_pgd_free,

From patchwork Mon Jan 20 02:40:11 2025
X-Patchwork-Submitter: Rik van Riel
X-Patchwork-Id: 13944704
From: Rik van Riel
To: x86@kernel.org
Cc: linux-kernel@vger.kernel.org, bp@alien8.de, peterz@infradead.org, dave.hansen@linux.intel.com, zhengqi.arch@bytedance.com, nadav.amit@gmail.com, thomas.lendacky@amd.com, kernel-team@meta.com, linux-mm@kvack.org, akpm@linux-foundation.org, jannh@google.com, mhklinux@outlook.com, andrew.cooper3@citrix.com, Rik van Riel, Dave Hansen
Subject: [PATCH v6 03/12] x86/mm: consolidate full flush threshold decision
Date: Sun, 19 Jan 2025 21:40:11 -0500
Message-ID: <20250120024104.1924753-4-riel@surriel.com>
In-Reply-To: <20250120024104.1924753-1-riel@surriel.com>
References: <20250120024104.1924753-1-riel@surriel.com>
Reduce code duplication by consolidating the decision point for whether to do individual invalidations or a full flush inside get_flush_tlb_info.

Signed-off-by: Rik van Riel
Suggested-by: Dave Hansen
---
 arch/x86/mm/tlb.c | 43 ++++++++++++++++++++-----------------------
 1 file changed, 20 insertions(+), 23 deletions(-)

diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
index 6cf881a942bb..4c2feb7259b1 100644
--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -1009,6 +1009,15 @@ static struct flush_tlb_info *get_flush_tlb_info(struct mm_struct *mm,
 	info->initiating_cpu	= smp_processor_id();
 	info->trim_cpumask	= 0;
 
+	/*
+	 * If the number of flushes is so large that a full flush
+	 * would be faster, do a full flush.
+	 */
+	if ((end - start) >> stride_shift > tlb_single_page_flush_ceiling) {
+		info->start = 0;
+		info->end = TLB_FLUSH_ALL;
+	}
+
 	return info;
 }
 
@@ -1026,17 +1035,8 @@ void flush_tlb_mm_range(struct mm_struct *mm, unsigned long start,
 				bool freed_tables)
 {
 	struct flush_tlb_info *info;
+	int cpu = get_cpu();
 	u64 new_tlb_gen;
-	int cpu;
-
-	cpu = get_cpu();
-
-	/* Should we flush just the requested range? */
-	if ((end == TLB_FLUSH_ALL) ||
-	    ((end - start) >> stride_shift) > tlb_single_page_flush_ceiling) {
-		start = 0;
-		end = TLB_FLUSH_ALL;
-	}
 
 	/* This is also a barrier that synchronizes with switch_mm(). */
 	new_tlb_gen = inc_mm_tlb_gen(mm);
@@ -1089,22 +1089,19 @@ static void do_kernel_range_flush(void *info)
 
 void flush_tlb_kernel_range(unsigned long start, unsigned long end)
 {
-	/* Balance as user space task's flush, a bit conservative */
-	if (end == TLB_FLUSH_ALL ||
-	    (end - start) > tlb_single_page_flush_ceiling << PAGE_SHIFT) {
-		on_each_cpu(do_flush_tlb_all, NULL, 1);
-	} else {
-		struct flush_tlb_info *info;
+	struct flush_tlb_info *info;
 
-		preempt_disable();
-		info = get_flush_tlb_info(NULL, start, end, 0, false,
-					  TLB_GENERATION_INVALID);
+	guard(preempt)();
+	info = get_flush_tlb_info(NULL, start, end, PAGE_SHIFT, false,
+				  TLB_GENERATION_INVALID);
+
+	if (info->end == TLB_FLUSH_ALL)
+		on_each_cpu(do_flush_tlb_all, NULL, 1);
+	else
 		on_each_cpu(do_kernel_range_flush, info, 1);
 
-		put_flush_tlb_info();
-		preempt_enable();
-	}
+	put_flush_tlb_info();
 }
 
@@ -1276,7 +1273,7 @@ void arch_tlbbatch_flush(struct arch_tlbflush_unmap_batch *batch)
 
 	int cpu = get_cpu();
 
-	info = get_flush_tlb_info(NULL, 0, TLB_FLUSH_ALL, 0, false,
+	info = get_flush_tlb_info(NULL, 0, TLB_FLUSH_ALL, PAGE_SHIFT, false,
 				  TLB_GENERATION_INVALID);
 
 	/*
	 * flush_tlb_multi() is not optimized for the common case in which only

From patchwork Mon Jan 20 02:40:12 2025
X-Patchwork-Submitter: Rik van Riel
X-Patchwork-Id: 13944707
From: Rik van Riel
To: x86@kernel.org
Cc: linux-kernel@vger.kernel.org, bp@alien8.de, peterz@infradead.org, dave.hansen@linux.intel.com, zhengqi.arch@bytedance.com, nadav.amit@gmail.com, thomas.lendacky@amd.com, kernel-team@meta.com, linux-mm@kvack.org, akpm@linux-foundation.org, jannh@google.com, mhklinux@outlook.com, andrew.cooper3@citrix.com, Rik van Riel
Subject: [PATCH v6 04/12] x86/mm: get INVLPGB count max from CPUID
Date: Sun, 19 Jan 2025 21:40:12 -0500
Message-ID: <20250120024104.1924753-5-riel@surriel.com>
In-Reply-To: <20250120024104.1924753-1-riel@surriel.com>
References: <20250120024104.1924753-1-riel@surriel.com>
In its CPUID data, the CPU advertises the maximum number of pages that can be shot down with one INVLPGB instruction. Save that information for later use.

Signed-off-by: Rik van Riel
---
 arch/x86/Kconfig.cpu               | 5 +++++
 arch/x86/include/asm/cpufeatures.h | 1 +
 arch/x86/include/asm/tlbflush.h    | 7 +++++++
 arch/x86/kernel/cpu/amd.c          | 8 ++++++++
 4 files changed, 21 insertions(+)

diff --git a/arch/x86/Kconfig.cpu b/arch/x86/Kconfig.cpu
index 2a7279d80460..abe013a1b076 100644
--- a/arch/x86/Kconfig.cpu
+++ b/arch/x86/Kconfig.cpu
@@ -395,6 +395,10 @@ config X86_VMX_FEATURE_NAMES
 	def_bool y
 	depends on IA32_FEAT_CTL
 
+config X86_BROADCAST_TLB_FLUSH
+	def_bool y
+	depends on CPU_SUP_AMD && 64BIT
+
 menuconfig PROCESSOR_SELECT
 	bool "Supported processor vendors" if EXPERT
 	help
@@ -431,6 +435,7 @@ config CPU_SUP_CYRIX_32
 config CPU_SUP_AMD
 	default y
 	bool "Support AMD processors" if PROCESSOR_SELECT
+	select X86_BROADCAST_TLB_FLUSH
 	help
 	  This enables detection, tunings and quirks for AMD processors
diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h
index 17b6590748c0..f9b832e971c5 100644
--- a/arch/x86/include/asm/cpufeatures.h
+++ b/arch/x86/include/asm/cpufeatures.h
@@ -338,6 +338,7 @@
 #define X86_FEATURE_CLZERO		(13*32+ 0) /* "clzero" CLZERO instruction */
 #define X86_FEATURE_IRPERF		(13*32+ 1) /* "irperf" Instructions Retired Count */
 #define X86_FEATURE_XSAVEERPTR		(13*32+ 2) /* "xsaveerptr" Always save/restore FP error pointers */
+#define X86_FEATURE_INVLPGB		(13*32+ 3) /* INVLPGB and TLBSYNC instruction supported. */
 #define X86_FEATURE_RDPRU		(13*32+ 4) /* "rdpru" Read processor register at user level */
 #define X86_FEATURE_WBNOINVD		(13*32+ 9) /* "wbnoinvd" WBNOINVD instruction */
 #define X86_FEATURE_AMD_IBPB		(13*32+12) /* Indirect Branch Prediction Barrier */
diff --git a/arch/x86/include/asm/tlbflush.h b/arch/x86/include/asm/tlbflush.h
index 02fc2aa06e9e..8fe3b2dda507 100644
--- a/arch/x86/include/asm/tlbflush.h
+++ b/arch/x86/include/asm/tlbflush.h
@@ -183,6 +183,13 @@ static inline void cr4_init_shadow(void)
 extern unsigned long mmu_cr4_features;
 extern u32 *trampoline_cr4_features;
 
+/* How many pages can we invalidate with one INVLPGB. */
+#ifdef CONFIG_X86_BROADCAST_TLB_FLUSH
+extern u16 invlpgb_count_max;
+#else
+#define invlpgb_count_max 1
+#endif
+
 extern void initialize_tlbstate_and_flush(void);
diff --git a/arch/x86/kernel/cpu/amd.c b/arch/x86/kernel/cpu/amd.c
index 79d2e17f6582..bcf73775b4f8 100644
--- a/arch/x86/kernel/cpu/amd.c
+++ b/arch/x86/kernel/cpu/amd.c
@@ -29,6 +29,8 @@
 
 #include "cpu.h"
 
+u16 invlpgb_count_max __ro_after_init;
+
 static inline int rdmsrl_amd_safe(unsigned msr, unsigned long long *p)
 {
 	u32 gprs[8] = { 0 };
@@ -1135,6 +1137,12 @@ static void cpu_detect_tlb_amd(struct cpuinfo_x86 *c)
 	tlb_lli_2m[ENTRIES] = eax & mask;
 
 	tlb_lli_4m[ENTRIES] = tlb_lli_2m[ENTRIES] >> 1;
+
+	/* Max number of pages INVLPGB can invalidate in one shot */
+	if (boot_cpu_has(X86_FEATURE_INVLPGB)) {
+		cpuid(0x80000008, &eax, &ebx, &ecx, &edx);
+		invlpgb_count_max = (edx & 0xffff) + 1;
+	}
 }
 
 static const struct cpu_dev amd_cpu_dev = {

From patchwork Mon Jan 20 02:40:13 2025
X-Patchwork-Submitter: Rik van Riel
X-Patchwork-Id: 13944702
From: Rik van Riel
To: x86@kernel.org
Cc: linux-kernel@vger.kernel.org, bp@alien8.de, peterz@infradead.org,
    dave.hansen@linux.intel.com, zhengqi.arch@bytedance.com,
    nadav.amit@gmail.com, thomas.lendacky@amd.com, kernel-team@meta.com,
    linux-mm@kvack.org, akpm@linux-foundation.org, jannh@google.com,
    mhklinux@outlook.com, andrew.cooper3@citrix.com, Rik van Riel
Subject: [PATCH v6 05/12] x86/mm: add INVLPGB support code
Date: Sun, 19 Jan 2025 21:40:13 -0500
Message-ID: <20250120024104.1924753-6-riel@surriel.com>

Add invlpgb.h with the helper functions and definitions needed to use
broadcast TLB invalidation on AMD EPYC 3 and newer CPUs.
Signed-off-by: Rik van Riel
---
 arch/x86/include/asm/invlpgb.h  | 97 +++++++++++++++++++++++++++++++++
 arch/x86/include/asm/tlbflush.h |  1 +
 2 files changed, 98 insertions(+)
 create mode 100644 arch/x86/include/asm/invlpgb.h

diff --git a/arch/x86/include/asm/invlpgb.h b/arch/x86/include/asm/invlpgb.h
new file mode 100644
index 000000000000..4dfd09e65fa6
--- /dev/null
+++ b/arch/x86/include/asm/invlpgb.h
@@ -0,0 +1,97 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef _ASM_X86_INVLPGB
+#define _ASM_X86_INVLPGB
+
+#include
+#include
+
+/*
+ * INVLPGB does broadcast TLB invalidation across all the CPUs in the system.
+ *
+ * The INVLPGB instruction is weakly ordered, and a batch of invalidations can
+ * be done in a parallel fashion.
+ *
+ * TLBSYNC is used to ensure that pending INVLPGB invalidations initiated from
+ * this CPU have completed.
+ */
+static inline void __invlpgb(unsigned long asid, unsigned long pcid,
+			     unsigned long addr, u16 extra_count,
+			     bool pmd_stride, unsigned long flags)
+{
+	u32 edx = (pcid << 16) | asid;
+	u32 ecx = (pmd_stride << 31) | extra_count;
+	u64 rax = addr | flags;
+
+	/* INVLPGB; supported in binutils >= 2.36. */
+	asm volatile(".byte 0x0f, 0x01, 0xfe" : : "a" (rax), "c" (ecx), "d" (edx));
+}
+
+/* Wait for INVLPGB originated by this CPU to complete. */
+static inline void tlbsync(void)
+{
+	cant_migrate();
+	/* TLBSYNC: supported in binutils >= 2.36. */
+	asm volatile(".byte 0x0f, 0x01, 0xff" ::: "memory");
+}
+
+/*
+ * INVLPGB can be targeted by virtual address, PCID, ASID, or any combination
+ * of the three. For example:
+ * - INVLPGB_VA | INVLPGB_INCLUDE_GLOBAL: invalidate all TLB entries at the address
+ * - INVLPGB_PCID:			  invalidate all TLB entries matching the PCID
+ *
+ * The first can be used to invalidate (kernel) mappings at a particular
+ * address across all processes.
+ *
+ * The latter invalidates all TLB entries matching a PCID.
+ */
+#define INVLPGB_VA			BIT(0)
+#define INVLPGB_PCID			BIT(1)
+#define INVLPGB_ASID			BIT(2)
+#define INVLPGB_INCLUDE_GLOBAL		BIT(3)
+#define INVLPGB_FINAL_ONLY		BIT(4)
+#define INVLPGB_INCLUDE_NESTED		BIT(5)
+
+/* Flush all mappings for a given pcid and addr, not including globals. */
+static inline void invlpgb_flush_user(unsigned long pcid,
+				      unsigned long addr)
+{
+	__invlpgb(0, pcid, addr, 0, 0, INVLPGB_PCID | INVLPGB_VA);
+	tlbsync();
+}
+
+static inline void invlpgb_flush_user_nr_nosync(unsigned long pcid,
+						unsigned long addr,
+						u16 nr,
+						bool pmd_stride)
+{
+	__invlpgb(0, pcid, addr, nr - 1, pmd_stride, INVLPGB_PCID | INVLPGB_VA);
+}
+
+/* Flush all mappings for a given PCID, not including globals. */
+static inline void invlpgb_flush_single_pcid_nosync(unsigned long pcid)
+{
+	__invlpgb(0, pcid, 0, 0, 0, INVLPGB_PCID);
+}
+
+/* Flush all mappings, including globals, for all PCIDs. */
+static inline void invlpgb_flush_all(void)
+{
+	__invlpgb(0, 0, 0, 0, 0, INVLPGB_INCLUDE_GLOBAL);
+	tlbsync();
+}
+
+/* Flush addr, including globals, for all PCIDs. */
+static inline void invlpgb_flush_addr_nosync(unsigned long addr, u16 nr)
+{
+	__invlpgb(0, 0, addr, nr - 1, 0, INVLPGB_INCLUDE_GLOBAL);
+}
+
+/* Flush all mappings for all PCIDs except globals. */
+static inline void invlpgb_flush_all_nonglobals(void)
+{
+	__invlpgb(0, 0, 0, 0, 0, 0);
+	tlbsync();
+}
+
+#endif /* _ASM_X86_INVLPGB */
diff --git a/arch/x86/include/asm/tlbflush.h b/arch/x86/include/asm/tlbflush.h
index 8fe3b2dda507..dba5caa4a9f4 100644
--- a/arch/x86/include/asm/tlbflush.h
+++ b/arch/x86/include/asm/tlbflush.h
@@ -10,6 +10,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
From: Rik van Riel
To: x86@kernel.org
Cc: linux-kernel@vger.kernel.org, bp@alien8.de, peterz@infradead.org,
    dave.hansen@linux.intel.com, zhengqi.arch@bytedance.com,
    nadav.amit@gmail.com, thomas.lendacky@amd.com, kernel-team@meta.com,
    linux-mm@kvack.org, akpm@linux-foundation.org, jannh@google.com,
    mhklinux@outlook.com, andrew.cooper3@citrix.com, Rik van Riel
Subject: [PATCH v6 06/12] x86/mm: use INVLPGB for kernel TLB flushes
Date: Sun, 19 Jan 2025 21:40:14 -0500
Message-ID: <20250120024104.1924753-7-riel@surriel.com>

Use broadcast TLB invalidation for kernel addresses when available.
This removes the need to send IPIs for kernel TLB flushes.
Signed-off-by: Rik van Riel
---
 arch/x86/mm/tlb.c | 28 +++++++++++++++++++++++++++-
 1 file changed, 27 insertions(+), 1 deletion(-)

diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
index 4c2feb7259b1..2c9e9b7482dd 100644
--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -1077,6 +1077,30 @@ void flush_tlb_all(void)
 	on_each_cpu(do_flush_tlb_all, NULL, 1);
 }
 
+static bool broadcast_kernel_range_flush(struct flush_tlb_info *info)
+{
+	unsigned long addr;
+	unsigned long nr;
+
+	if (!IS_ENABLED(CONFIG_X86_BROADCAST_TLB_FLUSH))
+		return false;
+
+	if (!cpu_feature_enabled(X86_FEATURE_INVLPGB))
+		return false;
+
+	if (info->end == TLB_FLUSH_ALL) {
+		invlpgb_flush_all();
+		return true;
+	}
+
+	for (addr = info->start; addr < info->end; addr += nr << PAGE_SHIFT) {
+		nr = min((info->end - addr) >> PAGE_SHIFT, invlpgb_count_max);
+		invlpgb_flush_addr_nosync(addr, nr);
+	}
+	tlbsync();
+	return true;
+}
+
 static void do_kernel_range_flush(void *info)
 {
 	struct flush_tlb_info *f = info;
@@ -1096,7 +1120,9 @@ void flush_tlb_kernel_range(unsigned long start, unsigned long end)
 	info = get_flush_tlb_info(NULL, start, end, PAGE_SHIFT, false,
 				  TLB_GENERATION_INVALID);
 
-	if (info->end == TLB_FLUSH_ALL)
+	if (broadcast_kernel_range_flush(info))
+		; /* Fall through. */
+	else if (info->end == TLB_FLUSH_ALL)
 		on_each_cpu(do_flush_tlb_all, NULL, 1);
 	else
 		on_each_cpu(do_kernel_range_flush, info, 1);
From: Rik van Riel
To: x86@kernel.org
Cc: linux-kernel@vger.kernel.org, bp@alien8.de, peterz@infradead.org,
    dave.hansen@linux.intel.com, zhengqi.arch@bytedance.com,
    nadav.amit@gmail.com, thomas.lendacky@amd.com, kernel-team@meta.com,
    linux-mm@kvack.org, akpm@linux-foundation.org, jannh@google.com,
    mhklinux@outlook.com, andrew.cooper3@citrix.com, Rik van Riel
Subject: [PATCH v6 07/12] x86/tlb: use INVLPGB in flush_tlb_all
Date: Sun, 19 Jan 2025 21:40:15 -0500
Message-ID: <20250120024104.1924753-8-riel@surriel.com>

The flush_tlb_all() function is not used a whole lot, but we might as
well use broadcast TLB flushing there, too.
Signed-off-by: Rik van Riel
---
 arch/x86/mm/tlb.c | 15 +++++++++++++++
 1 file changed, 15 insertions(+)

diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
index 2c9e9b7482dd..e2a0b7fc5fed 100644
--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -1065,6 +1065,19 @@ void flush_tlb_mm_range(struct mm_struct *mm, unsigned long start,
 }
 
+static bool broadcast_flush_tlb_all(void)
+{
+	if (!IS_ENABLED(CONFIG_X86_BROADCAST_TLB_FLUSH))
+		return false;
+
+	if (!cpu_feature_enabled(X86_FEATURE_INVLPGB))
+		return false;
+
+	guard(preempt)();
+	invlpgb_flush_all();
+	return true;
+}
+
 static void do_flush_tlb_all(void *info)
 {
 	count_vm_tlb_event(NR_TLB_REMOTE_FLUSH_RECEIVED);
@@ -1073,6 +1086,8 @@ static void do_flush_tlb_all(void *info)
 
 void flush_tlb_all(void)
 {
+	if (broadcast_flush_tlb_all())
+		return;
 	count_vm_tlb_event(NR_TLB_REMOTE_FLUSH);
 	on_each_cpu(do_flush_tlb_all, NULL, 1);
 }
From: Rik van Riel
To: x86@kernel.org
Cc: linux-kernel@vger.kernel.org, bp@alien8.de, peterz@infradead.org,
    dave.hansen@linux.intel.com, zhengqi.arch@bytedance.com,
    nadav.amit@gmail.com, thomas.lendacky@amd.com, kernel-team@meta.com,
    linux-mm@kvack.org, akpm@linux-foundation.org, jannh@google.com,
    mhklinux@outlook.com, andrew.cooper3@citrix.com, Rik van Riel
Subject: [PATCH v6 08/12] x86/mm: use broadcast TLB flushing for page reclaim TLB flushing
Date: Sun, 19 Jan 2025 21:40:16 -0500
Message-ID: <20250120024104.1924753-9-riel@surriel.com>

In the page reclaim code, we only track the CPU(s) where the TLB needs
to be flushed, rather than all the individual mappings that may be
getting invalidated. Use broadcast TLB flushing when that is available.

Signed-off-by: Rik van Riel
---
 arch/x86/mm/tlb.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
index e2a0b7fc5fed..9d4864db5720 100644
--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -1321,7 +1321,9 @@ void arch_tlbbatch_flush(struct arch_tlbflush_unmap_batch *batch)
 	 * a local TLB flush is needed. Optimize this use-case by calling
 	 * flush_tlb_func_local() directly in this case.
 	 */
-	if (cpumask_any_but(&batch->cpumask, cpu) < nr_cpu_ids) {
+	if (cpu_feature_enabled(X86_FEATURE_INVLPGB)) {
+		invlpgb_flush_all_nonglobals();
+	} else if (cpumask_any_but(&batch->cpumask, cpu) < nr_cpu_ids) {
 		flush_tlb_multi(&batch->cpumask, info);
 	} else if (cpumask_test_cpu(cpu, &batch->cpumask)) {
 		lockdep_assert_irqs_enabled();
From: Rik van Riel
To: x86@kernel.org
Cc: linux-kernel@vger.kernel.org, bp@alien8.de, peterz@infradead.org,
 dave.hansen@linux.intel.com, zhengqi.arch@bytedance.com,
 nadav.amit@gmail.com, thomas.lendacky@amd.com, kernel-team@meta.com,
 linux-mm@kvack.org, akpm@linux-foundation.org, jannh@google.com,
 mhklinux@outlook.com, andrew.cooper3@citrix.com, Rik van Riel
Subject: [PATCH v6 09/12] x86/mm: enable broadcast TLB invalidation for
 multi-threaded processes
Date: Sun, 19 Jan 2025 21:40:17 -0500
Message-ID: <20250120024104.1924753-10-riel@surriel.com>
In-Reply-To: <20250120024104.1924753-1-riel@surriel.com>
References: <20250120024104.1924753-1-riel@surriel.com>

Use broadcast TLB invalidation, using the INVLPGB instruction, on AMD
EPYC 3 and newer CPUs.
In order to not exhaust PCID space, and to keep TLB flushes local for
single-threaded processes, we only hand out broadcast ASIDs to processes
active on 3 or more CPUs, and gradually increase the threshold as
broadcast ASID space is depleted.

Signed-off-by: Rik van Riel
---
 arch/x86/include/asm/mmu.h         |   6 +
 arch/x86/include/asm/mmu_context.h |  14 ++
 arch/x86/include/asm/tlbflush.h    |  72 ++++++
 arch/x86/mm/tlb.c                  | 362 ++++++++++++++++++++++++++++-
 4 files changed, 442 insertions(+), 12 deletions(-)

diff --git a/arch/x86/include/asm/mmu.h b/arch/x86/include/asm/mmu.h
index 3b496cdcb74b..d71cd599fec4 100644
--- a/arch/x86/include/asm/mmu.h
+++ b/arch/x86/include/asm/mmu.h
@@ -69,6 +69,12 @@ typedef struct {
	u16 pkey_allocation_map;
	s16 execute_only_pkey;
 #endif
+
+#ifdef CONFIG_X86_BROADCAST_TLB_FLUSH
+	u16 global_asid;
+	bool asid_transition;
+#endif
+
 } mm_context_t;

 #define INIT_MM_CONTEXT(mm) \
diff --git a/arch/x86/include/asm/mmu_context.h b/arch/x86/include/asm/mmu_context.h
index 795fdd53bd0a..d670699d32c2 100644
--- a/arch/x86/include/asm/mmu_context.h
+++ b/arch/x86/include/asm/mmu_context.h
@@ -139,6 +139,8 @@ static inline void mm_reset_untag_mask(struct mm_struct *mm)
 #define enter_lazy_tlb enter_lazy_tlb
 extern void enter_lazy_tlb(struct mm_struct *mm, struct task_struct *tsk);

+extern void destroy_context_free_global_asid(struct mm_struct *mm);
+
 /*
  * Init a new mm. Used on mm copies, like at fork()
  * and on mm's that are brand-new, like at execve().
@@ -161,6 +163,14 @@ static inline int init_new_context(struct task_struct *tsk,
		mm->context.execute_only_pkey = -1;
	}
 #endif
+
+#ifdef CONFIG_X86_BROADCAST_TLB_FLUSH
+	if (cpu_feature_enabled(X86_FEATURE_INVLPGB)) {
+		mm->context.global_asid = 0;
+		mm->context.asid_transition = false;
+	}
+#endif
+
	mm_reset_untag_mask(mm);
	init_new_context_ldt(mm);
	return 0;
@@ -170,6 +180,10 @@ static inline int init_new_context(struct task_struct *tsk,
 static inline void destroy_context(struct mm_struct *mm)
 {
	destroy_context_ldt(mm);
+#ifdef CONFIG_X86_BROADCAST_TLB_FLUSH
+	if (cpu_feature_enabled(X86_FEATURE_INVLPGB))
+		destroy_context_free_global_asid(mm);
+#endif
 }

 extern void switch_mm(struct mm_struct *prev, struct mm_struct *next,
diff --git a/arch/x86/include/asm/tlbflush.h b/arch/x86/include/asm/tlbflush.h
index dba5caa4a9f4..5eae5c1aafa5 100644
--- a/arch/x86/include/asm/tlbflush.h
+++ b/arch/x86/include/asm/tlbflush.h
@@ -239,6 +239,78 @@ void flush_tlb_one_kernel(unsigned long addr);
 void flush_tlb_multi(const struct cpumask *cpumask,
		      const struct flush_tlb_info *info);

+#ifdef CONFIG_X86_BROADCAST_TLB_FLUSH
+static inline bool is_dyn_asid(u16 asid)
+{
+	if (!cpu_feature_enabled(X86_FEATURE_INVLPGB))
+		return true;
+
+	return asid < TLB_NR_DYN_ASIDS;
+}
+
+static inline bool is_global_asid(u16 asid)
+{
+	return !is_dyn_asid(asid);
+}
+
+static inline bool in_asid_transition(const struct flush_tlb_info *info)
+{
+	if (!cpu_feature_enabled(X86_FEATURE_INVLPGB))
+		return false;
+
+	return info->mm && READ_ONCE(info->mm->context.asid_transition);
+}
+
+static inline u16 mm_global_asid(struct mm_struct *mm)
+{
+	u16 asid;
+
+	if (!cpu_feature_enabled(X86_FEATURE_INVLPGB))
+		return 0;
+
+	asid = READ_ONCE(mm->context.global_asid);
+
+	/* mm->context.global_asid is either 0, or a global ASID */
+	VM_WARN_ON_ONCE(is_dyn_asid(asid));
+
+	return asid;
+}
+#else
+static inline bool is_dyn_asid(u16 asid)
+{
+	return true;
+}
+
+static inline bool is_global_asid(u16 asid)
+{
+	return false;
+}
+
+static inline bool in_asid_transition(const struct flush_tlb_info *info)
+{
+	return false;
+}
+
+static inline u16 mm_global_asid(struct mm_struct *mm)
+{
+	return 0;
+}
+
+static inline bool needs_global_asid_reload(struct mm_struct *next, u16 prev_asid)
+{
+	return false;
+}
+
+static inline void broadcast_tlb_flush(struct flush_tlb_info *info)
+{
+	VM_WARN_ON_ONCE(1);
+}
+
+static inline void consider_global_asid(struct mm_struct *mm)
+{
+}
+#endif
+
 #ifdef CONFIG_PARAVIRT
 #include <asm/paravirt.h>
 #endif
diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
index 9d4864db5720..08eee1f8573a 100644
--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -74,13 +74,15 @@
  * use different names for each of them:
  *
  * ASID  - [0, TLB_NR_DYN_ASIDS-1]
- *         the canonical identifier for an mm
+ *         the canonical identifier for an mm, dynamically allocated on each CPU
+ *         [TLB_NR_DYN_ASIDS, MAX_ASID_AVAILABLE-1]
+ *         the canonical, global identifier for an mm, identical across all CPUs
  *
- * kPCID - [1, TLB_NR_DYN_ASIDS]
+ * kPCID - [1, MAX_ASID_AVAILABLE]
  *         the value we write into the PCID part of CR3; corresponds to the
  *         ASID+1, because PCID 0 is special.
  *
- * uPCID - [2048 + 1, 2048 + TLB_NR_DYN_ASIDS]
+ * uPCID - [2048 + 1, 2048 + MAX_ASID_AVAILABLE]
  *         for KPTI each mm has two address spaces and thus needs two
  *         PCID values, but we can still do with a single ASID denomination
  *         for each mm. Corresponds to kPCID + 2048.
@@ -225,6 +227,20 @@ static void choose_new_asid(struct mm_struct *next, u64 next_tlb_gen,
		return;
	}

+	/*
+	 * TLB consistency for global ASIDs is maintained with broadcast TLB
+	 * flushing. The TLB is never outdated, and does not need flushing.
+	 */
+	if (IS_ENABLED(CONFIG_X86_BROADCAST_TLB_FLUSH) && static_cpu_has(X86_FEATURE_INVLPGB)) {
+		u16 global_asid = mm_global_asid(next);
+
+		if (global_asid) {
+			*new_asid = global_asid;
+			*need_flush = false;
+			return;
+		}
+	}
+
	if (this_cpu_read(cpu_tlbstate.invalidate_other))
		clear_asid_other();
@@ -251,6 +267,290 @@ static void choose_new_asid(struct mm_struct *next, u64 next_tlb_gen,
	*need_flush = true;
 }

+#ifdef CONFIG_X86_BROADCAST_TLB_FLUSH
+/*
+ * Logic for broadcast TLB invalidation.
+ */
+static DEFINE_RAW_SPINLOCK(global_asid_lock);
+static u16 last_global_asid = MAX_ASID_AVAILABLE;
+static DECLARE_BITMAP(global_asid_used, MAX_ASID_AVAILABLE) = { 0 };
+static DECLARE_BITMAP(global_asid_freed, MAX_ASID_AVAILABLE) = { 0 };
+static int global_asid_available = MAX_ASID_AVAILABLE - TLB_NR_DYN_ASIDS - 1;
+
+static void reset_global_asid_space(void)
+{
+	lockdep_assert_held(&global_asid_lock);
+
+	/*
+	 * A global TLB flush guarantees that any stale entries from
+	 * previously freed global ASIDs get flushed from the TLB
+	 * everywhere, making these global ASIDs safe to reuse.
+	 */
+	invlpgb_flush_all_nonglobals();
+
+	/*
+	 * Clear all the previously freed global ASIDs from the
+	 * global_asid_used bitmap, now that the global TLB flush
+	 * has made them actually available for re-use.
+	 */
+	bitmap_andnot(global_asid_used, global_asid_used,
+			global_asid_freed, MAX_ASID_AVAILABLE);
+	bitmap_clear(global_asid_freed, 0, MAX_ASID_AVAILABLE);
+
+	/*
+	 * ASIDs 0-TLB_NR_DYN_ASIDS are used for CPU-local ASID
+	 * assignments, for tasks doing IPI based TLB shootdowns.
+	 * Restart the search from the start of the global ASID space.
+	 */
+	last_global_asid = TLB_NR_DYN_ASIDS;
+}
+
+static u16 get_global_asid(void)
+{
+	lockdep_assert_held(&global_asid_lock);
+
+	do {
+		u16 start = last_global_asid;
+		u16 asid = find_next_zero_bit(global_asid_used, MAX_ASID_AVAILABLE, start);
+
+		if (asid >= MAX_ASID_AVAILABLE) {
+			reset_global_asid_space();
+			continue;
+		}
+
+		/* Claim this global ASID. */
+		__set_bit(asid, global_asid_used);
+		last_global_asid = asid;
+		global_asid_available--;
+		return asid;
+	} while (1);
+}
+
+/*
+ * Returns true if the mm is transitioning from a CPU-local ASID to a global
+ * (INVLPGB) ASID, or the other way around.
+ */
+static bool needs_global_asid_reload(struct mm_struct *next, u16 prev_asid)
+{
+	u16 global_asid = mm_global_asid(next);
+
+	if (global_asid && prev_asid != global_asid)
+		return true;
+
+	if (!global_asid && is_global_asid(prev_asid))
+		return true;
+
+	return false;
+}
+
+void destroy_context_free_global_asid(struct mm_struct *mm)
+{
+	if (!mm->context.global_asid)
+		return;
+
+	guard(raw_spinlock_irqsave)(&global_asid_lock);
+
+	/* The global ASID can be re-used only after flush at wrap-around. */
+	__set_bit(mm->context.global_asid, global_asid_freed);
+
+	mm->context.global_asid = 0;
+	global_asid_available++;
+}
+
+/*
+ * Check whether a process is currently active on more than "threshold" CPUs.
+ * This is a cheap estimation on whether or not it may make sense to assign
+ * a global ASID to this process, and use broadcast TLB invalidation.
+ */
+static bool mm_active_cpus_exceeds(struct mm_struct *mm, int threshold)
+{
+	int count = 0;
+	int cpu;
+
+	/* This quick check should eliminate most single threaded programs. */
+	if (cpumask_weight(mm_cpumask(mm)) <= threshold)
+		return false;
+
+	/* Slower check to make sure. */
+	for_each_cpu(cpu, mm_cpumask(mm)) {
+		/* Skip the CPUs that aren't really running this process. */
+		if (per_cpu(cpu_tlbstate.loaded_mm, cpu) != mm)
+			continue;
+
+		if (per_cpu(cpu_tlbstate_shared.is_lazy, cpu))
+			continue;
+
+		if (++count > threshold)
+			return true;
+	}
+	return false;
+}
+
+/*
+ * Assign a global ASID to the current process, protecting against
+ * races between multiple threads in the process.
+ */
+static void use_global_asid(struct mm_struct *mm)
+{
+	guard(raw_spinlock_irqsave)(&global_asid_lock);
+
+	/* This process is already using broadcast TLB invalidation. */
+	if (mm->context.global_asid)
+		return;
+
+	/* The last global ASID was consumed while waiting for the lock. */
+	if (!global_asid_available)
+		return;
+
+	/*
+	 * The transition from IPI TLB flushing, with a dynamic ASID,
+	 * and broadcast TLB flushing, using a global ASID, uses memory
+	 * ordering for synchronization.
+	 *
+	 * While the process has threads still using a dynamic ASID,
+	 * TLB invalidation IPIs continue to get sent.
+	 *
+	 * This code sets asid_transition first, before assigning the
+	 * global ASID.
+	 *
+	 * The TLB flush code will only verify the ASID transition
+	 * after it has seen the new global ASID for the process.
+	 */
+	WRITE_ONCE(mm->context.asid_transition, true);
+	WRITE_ONCE(mm->context.global_asid, get_global_asid());
+}
+
+/*
+ * Figure out whether to assign a global ASID to a process.
+ * We vary the threshold by how empty or full global ASID space is.
+ * 1/4 full: >= 4 active threads
+ * 1/2 full: >= 8 active threads
+ * 3/4 full: >= 16 active threads
+ * 7/8 full: >= 32 active threads
+ * etc
+ *
+ * This way we should never exhaust the global ASID space, even on very
+ * large systems, and the processes with the largest number of active
+ * threads should be able to use broadcast TLB invalidation.
+ */
+#define HALFFULL_THRESHOLD 8
+static bool meets_global_asid_threshold(struct mm_struct *mm)
+{
+	int avail = global_asid_available;
+	int threshold = HALFFULL_THRESHOLD;
+
+	if (!avail)
+		return false;
+
+	if (avail > MAX_ASID_AVAILABLE * 3 / 4) {
+		threshold = HALFFULL_THRESHOLD / 4;
+	} else if (avail > MAX_ASID_AVAILABLE / 2) {
+		threshold = HALFFULL_THRESHOLD / 2;
+	} else if (avail < MAX_ASID_AVAILABLE / 3) {
+		do {
+			avail *= 2;
+			threshold *= 2;
+		} while ((avail + threshold) < MAX_ASID_AVAILABLE / 2);
+	}
+
+	return mm_active_cpus_exceeds(mm, threshold);
+}
+
+static void consider_global_asid(struct mm_struct *mm)
+{
+	if (!static_cpu_has(X86_FEATURE_INVLPGB))
+		return;
+
+	/* Check every once in a while. */
+	if ((current->pid & 0x1f) != (jiffies & 0x1f))
+		return;
+
+	if (meets_global_asid_threshold(mm))
+		use_global_asid(mm);
+}
+
+static void finish_asid_transition(struct flush_tlb_info *info)
+{
+	struct mm_struct *mm = info->mm;
+	int bc_asid = mm_global_asid(mm);
+	int cpu;
+
+	if (!READ_ONCE(mm->context.asid_transition))
+		return;
+
+	for_each_cpu(cpu, mm_cpumask(mm)) {
+		/*
+		 * The remote CPU is context switching. Wait for that to
+		 * finish, to catch the unlikely case of it switching to
+		 * the target mm with an out of date ASID.
+		 */
+		while (READ_ONCE(per_cpu(cpu_tlbstate.loaded_mm, cpu)) == LOADED_MM_SWITCHING)
+			cpu_relax();
+
+		if (READ_ONCE(per_cpu(cpu_tlbstate.loaded_mm, cpu)) != mm)
+			continue;
+
+		/*
+		 * If at least one CPU is not using the global ASID yet,
+		 * send a TLB flush IPI. The IPI should cause stragglers
+		 * to transition soon.
+		 *
+		 * This can race with the CPU switching to another task;
+		 * that results in a (harmless) extra IPI.
+		 */
+		if (READ_ONCE(per_cpu(cpu_tlbstate.loaded_mm_asid, cpu)) != bc_asid) {
+			flush_tlb_multi(mm_cpumask(info->mm), info);
+			return;
+		}
+	}
+
+	/* All the CPUs running this process are using the global ASID. */
+	WRITE_ONCE(mm->context.asid_transition, false);
+}
+
+static void broadcast_tlb_flush(struct flush_tlb_info *info)
+{
+	bool pmd = info->stride_shift == PMD_SHIFT;
+	unsigned long maxnr = invlpgb_count_max;
+	unsigned long asid = info->mm->context.global_asid;
+	unsigned long addr = info->start;
+	unsigned long nr;
+
+	/* Flushing multiple pages at once is not supported with 1GB pages. */
+	if (info->stride_shift > PMD_SHIFT)
+		maxnr = 1;
+
+	/*
+	 * TLB flushes with INVLPGB are kicked off asynchronously.
+	 * The inc_mm_tlb_gen() guarantees page table updates are done
+	 * before these TLB flushes happen.
+	 */
+	if (info->end == TLB_FLUSH_ALL) {
+		invlpgb_flush_single_pcid_nosync(kern_pcid(asid));
+		/* Do any CPUs supporting INVLPGB need PTI? */
+		if (static_cpu_has(X86_FEATURE_PTI))
+			invlpgb_flush_single_pcid_nosync(user_pcid(asid));
+	} else for (; addr < info->end; addr += nr << info->stride_shift) {
+		/*
+		 * Calculate how many pages can be flushed at once; if the
+		 * remainder of the range is less than one page, flush one.
+		 */
+		nr = min(maxnr, (info->end - addr) >> info->stride_shift);
+		nr = max(nr, 1);
+
+		invlpgb_flush_user_nr_nosync(kern_pcid(asid), addr, nr, pmd);
+		/* Do any CPUs supporting INVLPGB need PTI? */
+		if (static_cpu_has(X86_FEATURE_PTI))
+			invlpgb_flush_user_nr_nosync(user_pcid(asid), addr, nr, pmd);
+	}
+
+	finish_asid_transition(info);
+
+	/* Wait for the INVLPGBs kicked off above to finish. */
+	tlbsync();
+}
+#endif /* CONFIG_X86_BROADCAST_TLB_FLUSH */
+
 /*
  * Given an ASID, flush the corresponding user ASID. We can delay this
  * until the next time we switch to it.
@@ -556,8 +856,9 @@ void switch_mm_irqs_off(struct mm_struct *unused, struct mm_struct *next,
	 */
	if (prev == next) {
		/* Not actually switching mm's */
-		VM_WARN_ON(this_cpu_read(cpu_tlbstate.ctxs[prev_asid].ctx_id) !=
-			   next->context.ctx_id);
+		VM_WARN_ON(is_dyn_asid(prev_asid) &&
+			   this_cpu_read(cpu_tlbstate.ctxs[prev_asid].ctx_id) !=
+			   next->context.ctx_id);

		/*
		 * If this races with another thread that enables lam, 'new_lam'
@@ -573,6 +874,23 @@ void switch_mm_irqs_off(struct mm_struct *unused, struct mm_struct *next,
			  !cpumask_test_cpu(cpu, mm_cpumask(next))))
			cpumask_set_cpu(cpu, mm_cpumask(next));

+		/*
+		 * Check if the current mm is transitioning to a new ASID.
+		 */
+		if (needs_global_asid_reload(next, prev_asid)) {
+			next_tlb_gen = atomic64_read(&next->context.tlb_gen);
+
+			choose_new_asid(next, next_tlb_gen, &new_asid, &need_flush);
+			goto reload_tlb;
+		}
+
+		/*
+		 * Broadcast TLB invalidation keeps this PCID up to date
+		 * all the time.
+		 */
+		if (is_global_asid(prev_asid))
+			return;
+
		/*
		 * If the CPU is not in lazy TLB mode, we are just switching
		 * from one thread in a process to another thread in the same
@@ -606,6 +924,13 @@ void switch_mm_irqs_off(struct mm_struct *unused, struct mm_struct *next,
		 */
		cond_mitigation(tsk);

+		/*
+		 * Let nmi_uaccess_okay() and finish_asid_transition()
+		 * know that we're changing CR3.
+		 */
+		this_cpu_write(cpu_tlbstate.loaded_mm, LOADED_MM_SWITCHING);
+		barrier();
+
		/*
		 * Leave this CPU in prev's mm_cpumask. Atomic writes to
		 * mm_cpumask can be expensive under contention. The CPU
@@ -620,14 +945,12 @@ void switch_mm_irqs_off(struct mm_struct *unused, struct mm_struct *next,
		next_tlb_gen = atomic64_read(&next->context.tlb_gen);

		choose_new_asid(next, next_tlb_gen, &new_asid, &need_flush);
-
-		/* Let nmi_uaccess_okay() know that we're changing CR3. */
-		this_cpu_write(cpu_tlbstate.loaded_mm, LOADED_MM_SWITCHING);
-		barrier();
	}

+reload_tlb:
	new_lam = mm_lam_cr3_mask(next);
	if (need_flush) {
+		VM_WARN_ON_ONCE(is_global_asid(new_asid));
		this_cpu_write(cpu_tlbstate.ctxs[new_asid].ctx_id, next->context.ctx_id);
		this_cpu_write(cpu_tlbstate.ctxs[new_asid].tlb_gen, next_tlb_gen);
		load_new_mm_cr3(next->pgd, new_asid, new_lam, true);
@@ -746,7 +1069,7 @@ static void flush_tlb_func(void *info)
	const struct flush_tlb_info *f = info;
	struct mm_struct *loaded_mm = this_cpu_read(cpu_tlbstate.loaded_mm);
	u32 loaded_mm_asid = this_cpu_read(cpu_tlbstate.loaded_mm_asid);
-	u64 local_tlb_gen = this_cpu_read(cpu_tlbstate.ctxs[loaded_mm_asid].tlb_gen);
+	u64 local_tlb_gen;
	bool local = smp_processor_id() == f->initiating_cpu;
	unsigned long nr_invalidate = 0;
	u64 mm_tlb_gen;
@@ -769,6 +1092,16 @@ static void flush_tlb_func(void *info)
	if (unlikely(loaded_mm == &init_mm))
		return;

+	/* Reload the ASID if transitioning into or out of a global ASID */
+	if (needs_global_asid_reload(loaded_mm, loaded_mm_asid)) {
+		switch_mm_irqs_off(NULL, loaded_mm, NULL);
+		loaded_mm_asid = this_cpu_read(cpu_tlbstate.loaded_mm_asid);
+	}
+
+	/* Broadcast ASIDs are always kept up to date with INVLPGB. */
+	if (is_global_asid(loaded_mm_asid))
+		return;
+
	VM_WARN_ON(this_cpu_read(cpu_tlbstate.ctxs[loaded_mm_asid].ctx_id) !=
		   loaded_mm->context.ctx_id);
@@ -786,6 +1119,8 @@ static void flush_tlb_func(void *info)
		return;
	}

+	local_tlb_gen = this_cpu_read(cpu_tlbstate.ctxs[loaded_mm_asid].tlb_gen);
+
	if (unlikely(f->new_tlb_gen != TLB_GENERATION_INVALID &&
		     f->new_tlb_gen <= local_tlb_gen)) {
		/*
@@ -953,7 +1288,7 @@ STATIC_NOPV void native_flush_tlb_multi(const struct cpumask *cpumask,
	 * up on the new contents of what used to be page tables, while
	 * doing a speculative memory access.
	 */
-	if (info->freed_tables)
+	if (info->freed_tables || in_asid_transition(info))
		on_each_cpu_mask(cpumask, flush_tlb_func, (void *)info, true);
	else
		on_each_cpu_cond_mask(should_flush_tlb, flush_tlb_func,
@@ -1049,9 +1384,12 @@ void flush_tlb_mm_range(struct mm_struct *mm, unsigned long start,
	 * a local TLB flush is needed. Optimize this use-case by calling
	 * flush_tlb_func_local() directly in this case.
	 */
-	if (cpumask_any_but(mm_cpumask(mm), cpu) < nr_cpu_ids) {
+	if (mm_global_asid(mm)) {
+		broadcast_tlb_flush(info);
+	} else if (cpumask_any_but(mm_cpumask(mm), cpu) < nr_cpu_ids) {
		info->trim_cpumask = should_trim_cpumask(mm);
		flush_tlb_multi(mm_cpumask(mm), info);
+		consider_global_asid(mm);
	} else if (mm == this_cpu_read(cpu_tlbstate.loaded_mm)) {
		lockdep_assert_irqs_enabled();
		local_irq_disable();

From patchwork Mon Jan 20 02:40:18 2025
X-Patchwork-Submitter: Rik van Riel
X-Patchwork-Id: 13944709
From: Rik van Riel
To: x86@kernel.org
Cc: linux-kernel@vger.kernel.org, bp@alien8.de, peterz@infradead.org,
 dave.hansen@linux.intel.com, zhengqi.arch@bytedance.com,
 nadav.amit@gmail.com, thomas.lendacky@amd.com, kernel-team@meta.com,
 linux-mm@kvack.org, akpm@linux-foundation.org, jannh@google.com,
 mhklinux@outlook.com, andrew.cooper3@citrix.com, Rik van Riel
Subject: [PATCH v6 10/12] x86,tlb: do targeted broadcast flushing from
 tlbbatch code
Date: Sun, 19 Jan 2025 21:40:18 -0500
Message-ID: <20250120024104.1924753-11-riel@surriel.com>
In-Reply-To: <20250120024104.1924753-1-riel@surriel.com>
References: <20250120024104.1924753-1-riel@surriel.com>

Instead of doing a system-wide TLB flush from arch_tlbbatch_flush,
queue up asynchronous, targeted flushes from arch_tlbbatch_add_pending.

This also allows us to avoid adding the CPUs of processes using broadcast
flushing to the batch->cpumask, and will hopefully further reduce TLB
flushing from the reclaim and compaction paths.

Signed-off-by: Rik van Riel
---
 arch/x86/include/asm/tlbbatch.h |  1 +
 arch/x86/include/asm/tlbflush.h | 12 ++------
 arch/x86/mm/tlb.c               | 54 +++++++++++++++++++++++++++++++--
 3 files changed, 55 insertions(+), 12 deletions(-)

diff --git a/arch/x86/include/asm/tlbbatch.h b/arch/x86/include/asm/tlbbatch.h
index 1ad56eb3e8a8..f9a17edf63ad 100644
--- a/arch/x86/include/asm/tlbbatch.h
+++ b/arch/x86/include/asm/tlbbatch.h
@@ -10,6 +10,7 @@ struct arch_tlbflush_unmap_batch {
	 * the PFNs being flushed..
	 */
	struct cpumask cpumask;
+	bool used_invlpgb;
 };

 #endif /* _ARCH_X86_TLBBATCH_H */
diff --git a/arch/x86/include/asm/tlbflush.h b/arch/x86/include/asm/tlbflush.h
index 5eae5c1aafa5..e5516afdef7d 100644
--- a/arch/x86/include/asm/tlbflush.h
+++ b/arch/x86/include/asm/tlbflush.h
@@ -358,21 +358,15 @@ static inline u64 inc_mm_tlb_gen(struct mm_struct *mm)
	return atomic64_inc_return(&mm->context.tlb_gen);
 }

-static inline void arch_tlbbatch_add_pending(struct arch_tlbflush_unmap_batch *batch,
-					     struct mm_struct *mm,
-					     unsigned long uaddr)
-{
-	inc_mm_tlb_gen(mm);
-	cpumask_or(&batch->cpumask, &batch->cpumask, mm_cpumask(mm));
-	mmu_notifier_arch_invalidate_secondary_tlbs(mm, 0, -1UL);
-}
-
 static inline void arch_flush_tlb_batched_pending(struct mm_struct *mm)
 {
	flush_tlb_mm(mm);
 }

 extern void arch_tlbbatch_flush(struct arch_tlbflush_unmap_batch *batch);
+extern void arch_tlbbatch_add_pending(struct arch_tlbflush_unmap_batch *batch,
+				      struct mm_struct *mm,
+				      unsigned long uaddr);

 static inline bool pte_flags_need_flush(unsigned long oldflags,
					 unsigned long newflags,
diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
index 08eee1f8573a..f731e6cfaa29 100644
--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -1659,9 +1659,7 @@ void arch_tlbbatch_flush(struct arch_tlbflush_unmap_batch *batch)
	 * a local TLB flush is needed. Optimize this use-case by calling
	 * flush_tlb_func_local() directly in this case.
	 */
-	if (cpu_feature_enabled(X86_FEATURE_INVLPGB)) {
-		invlpgb_flush_all_nonglobals();
-	} else if (cpumask_any_but(&batch->cpumask, cpu) < nr_cpu_ids) {
+	if (cpumask_any_but(&batch->cpumask, cpu) < nr_cpu_ids) {
		flush_tlb_multi(&batch->cpumask, info);
	} else if (cpumask_test_cpu(cpu, &batch->cpumask)) {
		lockdep_assert_irqs_enabled();
@@ -1670,12 +1668,62 @@ void arch_tlbbatch_flush(struct arch_tlbflush_unmap_batch *batch)
		local_irq_enable();
	}

+	/*
+	 * If we issued (asynchronous) INVLPGB flushes, wait for them here.
+	 * The cpumask above contains only CPUs that were running tasks
+	 * not using broadcast TLB flushing.
+	 */
+	if (cpu_feature_enabled(X86_FEATURE_INVLPGB) && batch->used_invlpgb) {
+		tlbsync();
+		migrate_enable();
+		batch->used_invlpgb = false;
+	}
+
	cpumask_clear(&batch->cpumask);

	put_flush_tlb_info();
	put_cpu();
 }

+void arch_tlbbatch_add_pending(struct arch_tlbflush_unmap_batch *batch,
+			       struct mm_struct *mm,
+			       unsigned long uaddr)
+{
+	if (static_cpu_has(X86_FEATURE_INVLPGB) && mm_global_asid(mm)) {
+		u16 asid = mm_global_asid(mm);
+		/*
+		 * Queue up an asynchronous invalidation. The corresponding
+		 * TLBSYNC is done in arch_tlbbatch_flush(), and must be done
+		 * on the same CPU.
+		 */
+		if (!batch->used_invlpgb) {
+			batch->used_invlpgb = true;
+			migrate_disable();
+		}
+		invlpgb_flush_user_nr_nosync(kern_pcid(asid), uaddr, 1, false);
+		/* Do any CPUs supporting INVLPGB need PTI? */
+		if (static_cpu_has(X86_FEATURE_PTI))
+			invlpgb_flush_user_nr_nosync(user_pcid(asid), uaddr, 1, false);
+
+		/*
+		 * Some CPUs might still be using a local ASID for this
+		 * process, and require IPIs, while others are using the
+		 * global ASID.
+		 *
+		 * In this corner case we need to do both the broadcast
+		 * TLB invalidation, and send IPIs. The IPIs will help
+		 * stragglers transition to the broadcast ASID.
+		 */
+		if (READ_ONCE(mm->context.asid_transition))
+			goto also_send_ipi;
+	} else {
+also_send_ipi:
+		inc_mm_tlb_gen(mm);
+		cpumask_or(&batch->cpumask, &batch->cpumask, mm_cpumask(mm));
+	}
+	mmu_notifier_arch_invalidate_secondary_tlbs(mm, 0, -1UL);
+}
+
 /*
  * Blindly accessing user memory from NMI context can be dangerous
  * if we're in the middle of switching the current user task or

From patchwork Mon Jan 20 02:40:19 2025
X-Patchwork-Submitter: Rik van Riel
X-Patchwork-Id: 13944703
permitted sender) smtp.mailfrom=riel@shelob.surriel.com ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=hostedemail.com; s=arc-20220608; t=1737340948; h=from:from:sender:sender:reply-to:subject:subject:date:date: message-id:message-id:to:to:cc:cc:mime-version:mime-version: content-type:content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=E11YoPwP2rmb054DCbWgK0xc/jXlKCd3xTPcG1+CCjs=; b=nLl4K9219osqvP9pDmLmSTTjqwTvZM45V9fFhQoPFv9L5eXCX3oCbq1o1XnHMCSrc/F11w Oy/TSZX2g2ktlrtuKB2nIdKRg7PyOKFDlNeDg79kDU0BscYyD/QjWR0eNhmWXNBI6ajhqj zAYlHGgGayP9ENGx0eYhBKHwTXEdQFY= ARC-Authentication-Results: i=1; imf30.hostedemail.com; dkim=none; dmarc=none; spf=pass (imf30.hostedemail.com: domain of riel@shelob.surriel.com designates 96.67.55.147 as permitted sender) smtp.mailfrom=riel@shelob.surriel.com ARC-Seal: i=1; s=arc-20220608; d=hostedemail.com; t=1737340948; a=rsa-sha256; cv=none; b=79bBq1jjSlH6VnbMwfskoJmCtwaTa1WiEABEVXCH9nLZdQaJ+Ai+njrGMTOIcmluqEahDW 1JVq46gDWA6hA9uDA9F4nwqn1wL6JDJxQEk9LdpRQ211hR1JJ/LqFJM95DOibvw0KRYHkT bz7oA1oWda3Jgar72OIyZPP8tLYhaXA= Received: from fangorn.home.surriel.com ([10.0.13.7]) by shelob.surriel.com with esmtpsa (TLS1.2) tls TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 (Exim 4.97.1) (envelope-from ) id 1tZhis-000000002w5-1cIp; Sun, 19 Jan 2025 21:41:06 -0500 From: Rik van Riel To: x86@kernel.org Cc: linux-kernel@vger.kernel.org, bp@alien8.de, peterz@infradead.org, dave.hansen@linux.intel.com, zhengqi.arch@bytedance.com, nadav.amit@gmail.com, thomas.lendacky@amd.com, kernel-team@meta.com, linux-mm@kvack.org, akpm@linux-foundation.org, jannh@google.com, mhklinux@outlook.com, andrew.cooper3@citrix.com, Rik van Riel Subject: [PATCH v6 11/12] x86/mm: enable AMD translation cache extensions Date: Sun, 19 Jan 2025 21:40:19 -0500 Message-ID: <20250120024104.1924753-12-riel@surriel.com> X-Mailer: git-send-email 2.47.1 In-Reply-To: <20250120024104.1924753-1-riel@surriel.com> References: 
<20250120024104.1924753-1-riel@surriel.com> MIME-Version: 1.0 X-Rspamd-Server: rspam05 X-Rspamd-Queue-Id: 185E38000F X-Stat-Signature: u6e7qsc8h11htpwnjjf81qoxq448uxf8 X-Rspam-User: X-HE-Tag: 1737340947-186215 X-HE-Meta: U2FsdGVkX187jFRJ9ckXC3Nd9J0cy27MNoHtBf+AMXEXCXu0ItU59FJu907fAwAjf7rpUbxMzXxxZkoJGw5ilLLYgG1u23yQg7w/En9SoB9X3R16n/Y0a6l2Y50xvz/2XWMoY+K+vl0dvU05Od8Npb/JNjONcKCl7nIxJdC/rCzrDsoXFssPZbjxnbfMerfMrPZ/xizbDGBQsPSqx+2lA5g+J8j8ywlzvOLK3Oj8ijRKAapekprvhuPNavVb9L/uCO3kLGYBn6xN3AsEG52j0RGdL5tTXP72zXy04OLer0hf4W2vbMgUlJqK0PDXO+swgIEFeFv5aQuAQSEod8tAN3CvmIgfovQthsqOx1n4kLhGJ1QF+r32Nc7IH9FodiGUhY4Fcy5hKXWwux1OshM4Wc8A8YQPASjmmajOVJn0DDA61KVTZEHwm479xsU9jK/EBMcKx8GEF8b/fjG8pGcgR9e8IMSIaFq5EUsHnzEEOeVyaXPgLbxjUbid4mEd1ZavThaC3cpr4nohriUKAqvMm9WdlXWNy3z34IX/EPJFXsn9nQx3ewnkho4B9k6JurJS/I8iiq5jolCm5yM0myQ7CMO3iFT0aKPayyjwhoDmlw2guBeQ99Pc7oELszAEGYT4GX4/WYLZ1s1ZQUIVl0d5DW8S9uz0SEGiC2aqIWdmRgTGXWiOlWqKaiVU+tWly6k6+H9E2lM0i89nbhDymVVLHAMKsJBFBqTPjfPkDfqp23dNFQ1wbWRADhckwwfFkdM0/rsrmoRkIv5EAqZ/Gu20RBcUSsM7mtnQi8BVORSa0N2Xjx7NCXrr8foKerd61a9+HFK3b9bZTz+LHZ46EgiKGv3TWQE5pdgyQlq5TzF4dD+bBNbFeLglTdR61pKAiYbuh7EWGfNu7xb2+blp1xToYEzvCko1HGZt5F18hy6SpLL2XFW38H3sS46SOCCB3EiAnn90DrXvz4X7UTEPwWd cfwWS5rb zyTfOuGim0ns6diSMzXekiWWSeTXdjITzKvOsX7UDcbMaX381l0Qeub9rZX2pjcVhNexAhRksUkwXCG8LOvdM5bAt2Vp75xMnLcEpdobr8UVoi9RWVLm55qFRNFXN9nBMg0bpzZuwgpl8yBiO0fL6SXmkEF+unYsYd6IawBZFm5ZyWCJnoi/ENe+tsGU7F9B6GEhcG2kyKfhR+W9VfRPHOqB1OTOyZTEnDch2Wei94yPraF6x7mOAO/egvewkQsL2jw/C8r8o/FEBAi6QePYsLj9rGVBlgYxuyonZhna9Au/sH5gTdb6nZjzP/jiWBcGVB7NzSxGiw3mAcuMe8nGcaoZ7zsDo+PZyN5ipWKqtJa+6HE325NOqOFChc6rihlGk72Ho X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: List-Subscribe: List-Unsubscribe: With AMD TCE (translation cache extensions) only the intermediate mappings that cover the address range zapped by INVLPG / INVLPGB get invalidated, rather than all intermediate mappings getting zapped at 
every TLB invalidation. This can help reduce the TLB miss rate, by
keeping more intermediate mappings in the cache.

From the AMD manual:

Translation Cache Extension (TCE) Bit. Bit 15, read/write. Setting this
bit to 1 changes how the INVLPG, INVLPGB, and INVPCID instructions operate
on TLB entries. When this bit is 0, these instructions remove the target
PTE from the TLB as well as all upper-level table entries that are cached
in the TLB, whether or not they are associated with the target PTE. When
this bit is set, these instructions will remove the target PTE and only
those upper-level entries that lead to the target PTE in the page table
hierarchy, leaving unrelated upper-level entries intact.

Signed-off-by: Rik van Riel
---
 arch/x86/include/asm/msr-index.h       | 2 ++
 arch/x86/kernel/cpu/amd.c              | 4 ++++
 tools/arch/x86/include/asm/msr-index.h | 2 ++
 3 files changed, 8 insertions(+)

diff --git a/arch/x86/include/asm/msr-index.h b/arch/x86/include/asm/msr-index.h
index 3ae84c3b8e6d..dc1c1057f26e 100644
--- a/arch/x86/include/asm/msr-index.h
+++ b/arch/x86/include/asm/msr-index.h
@@ -25,6 +25,7 @@
 #define _EFER_SVME		12 /* Enable virtualization */
 #define _EFER_LMSLE		13 /* Long Mode Segment Limit Enable */
 #define _EFER_FFXSR		14 /* Enable Fast FXSAVE/FXRSTOR */
+#define _EFER_TCE		15 /* Enable Translation Cache Extensions */
 #define _EFER_AUTOIBRS		21 /* Enable Automatic IBRS */
 
 #define EFER_SCE		(1<<_EFER_SCE)
@@ -34,6 +35,7 @@
 #define EFER_SVME		(1<<_EFER_SVME)
 #define EFER_LMSLE		(1<<_EFER_LMSLE)
 #define EFER_FFXSR		(1<<_EFER_FFXSR)
+#define EFER_TCE		(1<<_EFER_TCE)
 #define EFER_AUTOIBRS		(1<<_EFER_AUTOIBRS)
 
 /*
diff --git a/arch/x86/kernel/cpu/amd.c b/arch/x86/kernel/cpu/amd.c
index bcf73775b4f8..21076252a491 100644
--- a/arch/x86/kernel/cpu/amd.c
+++ b/arch/x86/kernel/cpu/amd.c
@@ -1071,6 +1071,10 @@ static void init_amd(struct cpuinfo_x86 *c)
 	/* AMD CPUs don't need fencing after x2APIC/TSC_DEADLINE MSR writes.
	 */
 	clear_cpu_cap(c, X86_FEATURE_APIC_MSRS_FENCE);
+
+	/* Enable Translation Cache Extension */
+	if (cpu_feature_enabled(X86_FEATURE_TCE))
+		msr_set_bit(MSR_EFER, _EFER_TCE);
 }
 
 #ifdef CONFIG_X86_32
diff --git a/tools/arch/x86/include/asm/msr-index.h b/tools/arch/x86/include/asm/msr-index.h
index 3ae84c3b8e6d..dc1c1057f26e 100644
--- a/tools/arch/x86/include/asm/msr-index.h
+++ b/tools/arch/x86/include/asm/msr-index.h
@@ -25,6 +25,7 @@
 #define _EFER_SVME		12 /* Enable virtualization */
 #define _EFER_LMSLE		13 /* Long Mode Segment Limit Enable */
 #define _EFER_FFXSR		14 /* Enable Fast FXSAVE/FXRSTOR */
+#define _EFER_TCE		15 /* Enable Translation Cache Extensions */
 #define _EFER_AUTOIBRS		21 /* Enable Automatic IBRS */
 
 #define EFER_SCE		(1<<_EFER_SCE)
@@ -34,6 +35,7 @@
 #define EFER_SVME		(1<<_EFER_SVME)
 #define EFER_LMSLE		(1<<_EFER_LMSLE)
 #define EFER_FFXSR		(1<<_EFER_FFXSR)
+#define EFER_TCE		(1<<_EFER_TCE)
 #define EFER_AUTOIBRS		(1<<_EFER_AUTOIBRS)
 
 /*

From patchwork Mon Jan 20 02:40:20 2025
From: Rik van Riel
To: x86@kernel.org
Subject: [PATCH v6 12/12] x86/mm: only invalidate final translations with INVLPGB
Date: Sun, 19 Jan 2025 21:40:20 -0500
Message-ID: <20250120024104.1924753-13-riel@surriel.com>
In-Reply-To: <20250120024104.1924753-1-riel@surriel.com>

Use the INVLPGB_FINAL_ONLY flag when invalidating mappings with
INVLPGB. This way only leaf mappings get removed from the TLB,
leaving intermediate translations cached.

On the (rare) occasions where we free page tables we do a full
flush, ensuring intermediate translations get flushed from the TLB.

Signed-off-by: Rik van Riel
---
 arch/x86/include/asm/invlpgb.h | 10 ++++++++--
 arch/x86/mm/tlb.c              |  8 ++++----
 2 files changed, 12 insertions(+), 6 deletions(-)

diff --git a/arch/x86/include/asm/invlpgb.h b/arch/x86/include/asm/invlpgb.h
index 4dfd09e65fa6..418402535319 100644
--- a/arch/x86/include/asm/invlpgb.h
+++ b/arch/x86/include/asm/invlpgb.h
@@ -63,9 +63,15 @@ static inline void invlpgb_flush_user(unsigned long pcid,
 
 static inline void invlpgb_flush_user_nr_nosync(unsigned long pcid,
 						unsigned long addr,
 						u16 nr,
-						bool pmd_stride)
+						bool pmd_stride,
+						bool freed_tables)
 {
-	__invlpgb(0, pcid, addr, nr - 1, pmd_stride, INVLPGB_PCID | INVLPGB_VA);
+	unsigned long flags = INVLPGB_PCID | INVLPGB_VA;
+
+	if (!freed_tables)
+		flags |= INVLPGB_FINAL_ONLY;
+
+	__invlpgb(0, pcid, addr, nr - 1, pmd_stride, flags);
 }
 
 /* Flush all mappings for a given PCID, not including globals.
 */
diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
index f731e6cfaa29..4057afb6edc0 100644
--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -538,10 +538,10 @@ static void broadcast_tlb_flush(struct flush_tlb_info *info)
 		nr = min(maxnr, (info->end - addr) >> info->stride_shift);
 		nr = max(nr, 1);
 
-		invlpgb_flush_user_nr_nosync(kern_pcid(asid), addr, nr, pmd);
+		invlpgb_flush_user_nr_nosync(kern_pcid(asid), addr, nr, pmd, info->freed_tables);
 		/* Do any CPUs supporting INVLPGB need PTI? */
 		if (static_cpu_has(X86_FEATURE_PTI))
-			invlpgb_flush_user_nr_nosync(user_pcid(asid), addr, nr, pmd);
+			invlpgb_flush_user_nr_nosync(user_pcid(asid), addr, nr, pmd, info->freed_tables);
 	}
 
 	finish_asid_transition(info);
@@ -1700,10 +1700,10 @@ void arch_tlbbatch_add_pending(struct arch_tlbflush_unmap_batch *batch,
 			batch->used_invlpgb = true;
 			migrate_disable();
 		}
-		invlpgb_flush_user_nr_nosync(kern_pcid(asid), uaddr, 1, false);
+		invlpgb_flush_user_nr_nosync(kern_pcid(asid), uaddr, 1, false, false);
 		/* Do any CPUs supporting INVLPGB need PTI? */
 		if (static_cpu_has(X86_FEATURE_PTI))
-			invlpgb_flush_user_nr_nosync(user_pcid(asid), uaddr, 1, false);
+			invlpgb_flush_user_nr_nosync(user_pcid(asid), uaddr, 1, false, false);
 
 		/*
 		 * Some CPUs might still be using a local ASID for this