From patchwork Thu Jan 11 06:07:53 2024
X-Patchwork-Submitter: Byungchul Park <byungchul@sk.com>
X-Patchwork-Id: 13516789
From: Byungchul Park <byungchul@sk.com>
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org
Cc: kernel_team@skhynix.com, akpm@linux-foundation.org, ying.huang@intel.com,
	namit@vmware.com, xhao@linux.alibaba.com, mgorman@techsingularity.net,
	hughd@google.com, willy@infradead.org, david@redhat.com,
	peterz@infradead.org, luto@kernel.org, tglx@linutronix.de,
	mingo@redhat.com, bp@alien8.de, dave.hansen@linux.intel.com
Subject: [v5 3/7] mm/rmap: Recognize read-only TLB entries during batched TLB flush
Date: Thu, 11 Jan 2024 15:07:53 +0900
Message-Id: <20240111060757.13563-4-byungchul@sk.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20240111060757.13563-1-byungchul@sk.com>
References: <20240111060757.13563-1-byungchul@sk.com>
Functionally, no change. This is preparation for the migrc mechanism, which
needs to recognize read-only TLB entries and makes use of them to batch TLB
flushes more aggressively. In addition, the newly introduced API, fold_ubc(),
will be used by the migrc mechanism when manipulating TLB batch data.
Signed-off-by: Byungchul Park <byungchul@sk.com>
---
 include/linux/sched.h |  1 +
 mm/internal.h         |  4 ++++
 mm/rmap.c             | 31 ++++++++++++++++++++++++++++++-
 3 files changed, 35 insertions(+), 1 deletion(-)

diff --git a/include/linux/sched.h b/include/linux/sched.h
index 292c31697248..0317e7a65151 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -1328,6 +1328,7 @@ struct task_struct {
 #endif
 	struct tlbflush_unmap_batch	tlb_ubc;
+	struct tlbflush_unmap_batch	tlb_ubc_ro;
 
 	/* Cache last used pipe for splice(): */
 	struct pipe_inode_info		*splice_pipe;
diff --git a/mm/internal.h b/mm/internal.h
index b61034bd50f5..b880f1e78700 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -923,6 +923,7 @@ extern struct workqueue_struct *mm_percpu_wq;
 void try_to_unmap_flush(void);
 void try_to_unmap_flush_dirty(void);
 void flush_tlb_batched_pending(struct mm_struct *mm);
+void fold_ubc(struct tlbflush_unmap_batch *dst, struct tlbflush_unmap_batch *src);
 #else
 static inline void try_to_unmap_flush(void)
 {
@@ -933,6 +934,9 @@ static inline void try_to_unmap_flush_dirty(void)
 static inline void flush_tlb_batched_pending(struct mm_struct *mm)
 {
 }
+static inline void fold_ubc(struct tlbflush_unmap_batch *dst, struct tlbflush_unmap_batch *src)
+{
+}
 #endif /* CONFIG_ARCH_WANT_BATCHED_UNMAP_TLB_FLUSH */
 
 extern const struct trace_print_flags pageflag_names[];
diff --git a/mm/rmap.c b/mm/rmap.c
index 7a27a2b41802..da36f23ff7b0 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -605,6 +605,28 @@ struct anon_vma *folio_lock_anon_vma_read(struct folio *folio,
 }
 
 #ifdef CONFIG_ARCH_WANT_BATCHED_UNMAP_TLB_FLUSH
+
+void fold_ubc(struct tlbflush_unmap_batch *dst,
+	      struct tlbflush_unmap_batch *src)
+{
+	if (!src->flush_required)
+		return;
+
+	/*
+	 * Fold src to dst.
+	 */
+	arch_tlbbatch_fold(&dst->arch, &src->arch);
+	dst->writable = dst->writable || src->writable;
+	dst->flush_required = true;
+
+	/*
+	 * Reset src.
+	 */
+	arch_tlbbatch_clear(&src->arch);
+	src->flush_required = false;
+	src->writable = false;
+}
+
 /*
  * Flush TLB entries for recently unmapped pages from remote CPUs. It is
  * important if a PTE was dirty when it was unmapped that it's flushed
@@ -614,7 +636,9 @@ struct anon_vma *folio_lock_anon_vma_read(struct folio *folio,
 void try_to_unmap_flush(void)
 {
 	struct tlbflush_unmap_batch *tlb_ubc = &current->tlb_ubc;
+	struct tlbflush_unmap_batch *tlb_ubc_ro = &current->tlb_ubc_ro;
 
+	fold_ubc(tlb_ubc, tlb_ubc_ro);
 	if (!tlb_ubc->flush_required)
 		return;
 
@@ -645,13 +669,18 @@ void try_to_unmap_flush_dirty(void)
 static void set_tlb_ubc_flush_pending(struct mm_struct *mm, pte_t pteval,
 				      unsigned long uaddr)
 {
-	struct tlbflush_unmap_batch *tlb_ubc = &current->tlb_ubc;
+	struct tlbflush_unmap_batch *tlb_ubc;
 	int batch;
 	bool writable = pte_dirty(pteval);
 
 	if (!pte_accessible(mm, pteval))
 		return;
 
+	if (pte_write(pteval) || writable)
+		tlb_ubc = &current->tlb_ubc;
+	else
+		tlb_ubc = &current->tlb_ubc_ro;
+
 	arch_tlbbatch_add_pending(&tlb_ubc->arch, mm, uaddr);
 	tlb_ubc->flush_required = true;