From patchwork Thu Feb 6 04:43:22 2025
X-Patchwork-Submitter: Rik van Riel
X-Patchwork-Id: 13962185
From: Rik van Riel
To: x86@kernel.org
Cc: linux-kernel@vger.kernel.org, bp@alien8.de, peterz@infradead.org,
    dave.hansen@linux.intel.com, zhengqi.arch@bytedance.com,
    nadav.amit@gmail.com, thomas.lendacky@amd.com, kernel-team@meta.com,
    linux-mm@kvack.org, akpm@linux-foundation.org, jannh@google.com,
    mhklinux@outlook.com, andrew.cooper3@citrix.com,
    Rik van Riel, Dave Hansen
Subject: [PATCH v9 03/12] x86/mm: consolidate full flush threshold decision
Date: Wed, 5 Feb 2025 23:43:22 -0500
Message-ID: <20250206044346.3810242-4-riel@surriel.com>
X-Mailer: git-send-email 2.47.1
In-Reply-To: <20250206044346.3810242-1-riel@surriel.com>
References: <20250206044346.3810242-1-riel@surriel.com>

Reduce code duplication by consolidating the decision point for whether
to do individual invalidations or a full flush inside get_flush_tlb_info.

Signed-off-by: Rik van Riel
Suggested-by: Dave Hansen
---
 arch/x86/mm/tlb.c | 56 ++++++++++++++++++++++++++---------------------
 1 file changed, 31 insertions(+), 25 deletions(-)
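[Note: the illustration below is not part of the patch. It sketches, in
plain user-space C, the decision that get_flush_tlb_info() now makes in
one place. round_down()/round_up() are open-coded here, and the ceiling
mirrors the upstream default of tlb_single_page_flush_ceiling (33,
tunable via debugfs); treat all names and values as stand-ins.]

#include <stdio.h>

#define TLB_FLUSH_ALL	(-1UL)

/* Stand-in for tlb_single_page_flush_ceiling (default 33 upstream). */
static unsigned long flush_ceiling = 33;

static void decide_flush(unsigned long start, unsigned long end,
			 unsigned int stride_shift)
{
	unsigned long stride = 1UL << stride_shift;

	/* Round to the stride so partial pages get fully invalidated. */
	start = start & ~(stride - 1);			/* round_down() */
	end = (end + stride - 1) & ~(stride - 1);	/* round_up()   */

	/* Past the ceiling, one full flush beats many per-page flushes. */
	if ((end - start) >> stride_shift > flush_ceiling) {
		start = 0;
		end = TLB_FLUSH_ALL;
	}

	if (end == TLB_FLUSH_ALL)
		printf("full flush\n");
	else
		printf("ranged flush %#lx-%#lx (%lu pages)\n",
		       start, end, (end - start) >> stride_shift);
}

int main(void)
{
	decide_flush(0x1234, 0x5678, 12); /* rounds to 0x1000-0x6000: 5 pages */
	decide_flush(0, 2UL << 20, 12);	  /* 512 pages > ceiling: full flush */
	return 0;
}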
diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
index 6cf881a942bb..36939b104561 100644
--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -1000,8 +1000,13 @@ static struct flush_tlb_info *get_flush_tlb_info(struct mm_struct *mm,
 	BUG_ON(this_cpu_inc_return(flush_tlb_info_idx) != 1);
 #endif
 
-	info->start		= start;
-	info->end		= end;
+	/*
+	 * Round the start and end addresses to the page size specified
+	 * by the stride shift. This ensures partial pages at the end of
+	 * a range get fully invalidated.
+	 */
+	info->start		= round_down(start, 1 << stride_shift);
+	info->end		= round_up(end, 1 << stride_shift);
 	info->mm		= mm;
 	info->stride_shift	= stride_shift;
 	info->freed_tables	= freed_tables;
@@ -1009,6 +1014,19 @@ static struct flush_tlb_info *get_flush_tlb_info(struct mm_struct *mm,
 	info->initiating_cpu	= smp_processor_id();
 	info->trim_cpumask	= 0;
 
+	WARN_ONCE(start != info->start || end != info->end,
+		  "TLB flush not stride %x aligned. Start %lx, end %lx\n",
+		  1 << stride_shift, start, end);
+
+	/*
+	 * If the number of flushes is so large that a full flush
+	 * would be faster, do a full flush.
+	 */
+	if ((end - start) >> stride_shift > tlb_single_page_flush_ceiling) {
+		info->start = 0;
+		info->end = TLB_FLUSH_ALL;
+	}
+
 	return info;
 }
 
@@ -1026,17 +1044,8 @@ void flush_tlb_mm_range(struct mm_struct *mm, unsigned long start,
 				bool freed_tables)
 {
 	struct flush_tlb_info *info;
+	int cpu = get_cpu();
 	u64 new_tlb_gen;
-	int cpu;
-
-	cpu = get_cpu();
-
-	/* Should we flush just the requested range? */
-	if ((end == TLB_FLUSH_ALL) ||
-	    ((end - start) >> stride_shift) > tlb_single_page_flush_ceiling) {
-		start = 0;
-		end = TLB_FLUSH_ALL;
-	}
 
 	/* This is also a barrier that synchronizes with switch_mm(). */
 	new_tlb_gen = inc_mm_tlb_gen(mm);
@@ -1089,22 +1098,19 @@ static void do_kernel_range_flush(void *info)
 
 void flush_tlb_kernel_range(unsigned long start, unsigned long end)
 {
-	/* Balance as user space task's flush, a bit conservative */
-	if (end == TLB_FLUSH_ALL ||
-	    (end - start) > tlb_single_page_flush_ceiling << PAGE_SHIFT) {
-		on_each_cpu(do_flush_tlb_all, NULL, 1);
-	} else {
-		struct flush_tlb_info *info;
+	struct flush_tlb_info *info;
 
-		preempt_disable();
-		info = get_flush_tlb_info(NULL, start, end, 0, false,
-					  TLB_GENERATION_INVALID);
+	guard(preempt)();
+
+	info = get_flush_tlb_info(NULL, start, end, PAGE_SHIFT, false,
+				  TLB_GENERATION_INVALID);
 
+	if (info->end == TLB_FLUSH_ALL)
+		on_each_cpu(do_flush_tlb_all, NULL, 1);
+	else
 		on_each_cpu(do_kernel_range_flush, info, 1);
 
-		put_flush_tlb_info();
-		preempt_enable();
-	}
+	put_flush_tlb_info();
 }
 
 /*
@@ -1276,7 +1282,7 @@ void arch_tlbbatch_flush(struct arch_tlbflush_unmap_batch *batch)
 
 	int cpu = get_cpu();
 
-	info = get_flush_tlb_info(NULL, 0, TLB_FLUSH_ALL, 0, false,
+	info = get_flush_tlb_info(NULL, 0, TLB_FLUSH_ALL, PAGE_SHIFT, false,
 				  TLB_GENERATION_INVALID);
 	/*
 	 * flush_tlb_multi() is not optimized for the common case in which only
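[Note: not part of the patch. guard(preempt)(), used above in
flush_tlb_kernel_range(), comes from the kernel's scoped-cleanup
machinery in <linux/cleanup.h>; the preempt guard class is defined in
<linux/preempt.h> as preempt_disable() on entry and preempt_enable() on
scope exit. A rough sketch of the equivalence, with do_work() as a
hypothetical stand-in:

	void with_guard(void)
	{
		guard(preempt)();	/* preempt_disable() happens here */
		do_work();
	}	/* preempt_enable() runs automatically on any exit path */

	void without_guard(void)
	{
		preempt_disable();
		do_work();
		preempt_enable();	/* must precede every return */
	}

This is why the explicit preempt_disable()/preempt_enable() pair, and
the else-block nesting around it, can simply go away.]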