From patchwork Thu Feb 20 05:20:23 2025
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Byungchul Park <byungchul@sk.com>
X-Patchwork-Id: 13983338
From: Byungchul Park <byungchul@sk.com>
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org
Cc: kernel_team@skhynix.com, akpm@linux-foundation.org,
	ying.huang@intel.com, vernhao@tencent.com,
	mgorman@techsingularity.net, hughd@google.com, willy@infradead.org,
	david@redhat.com, peterz@infradead.org, luto@kernel.org,
	tglx@linutronix.de, mingo@redhat.com, bp@alien8.de,
	dave.hansen@linux.intel.com, rjgolo@gmail.com
Subject: [RFC PATCH v12 22/26] mm/page_alloc: do not allow tlb shootdown if
	!preemptible() && non_luf_pages_ok()
Date: Thu, 20 Feb 2025 14:20:23 +0900
Message-Id: <20250220052027.58847-23-byungchul@sk.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20250220052027.58847-1-byungchul@sk.com>
References: <20250220052027.58847-1-byungchul@sk.com>

Do not perform tlb shootdown if the context is not preemptible and
there are already enough non-luf pages, so as not to hurt
preemptibility.
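
For illustration only, not part of the patch: the new gating can be
simulated in plain userspace C. preemptible(), in_task() and
irqs_disabled() are stubbed as booleans, and non_luf_pages_ok() is
reduced to a flag; the branch structure mirrors the
no_shootdown_context() hunk in mm/page_alloc.c below.

#include <stdbool.h>
#include <stdio.h>

/* Stand-ins for the kernel predicates used by no_shootdown_context(). */
struct ctx {
	bool preemptible;      /* preemptible() */
	bool in_task;          /* in_task() */
	bool irqs_disabled;    /* irqs_disabled() */
	bool non_luf_pages_ok; /* non_luf_pages_ok(zone); false when zone == NULL */
};

/*
 * Mirrors the patched no_shootdown_context(): with enough non-luf
 * pages available, refuse the shootdown unless fully preemptible;
 * otherwise only require irqs enabled and task context.
 */
static bool no_shootdown_context(const struct ctx *c)
{
	if (c->non_luf_pages_ok)
		return !(c->preemptible && c->in_task);

	return !(!c->irqs_disabled && c->in_task);
}

int main(void)
{
	/* preempt-disabled task context, plenty of non-luf pages. */
	struct ctx eased = { .preemptible = false, .in_task = true,
			     .irqs_disabled = false, .non_luf_pages_ok = true };
	/* same context, but under memory pressure. */
	struct ctx pressed = { .preemptible = false, .in_task = true,
			       .irqs_disabled = false, .non_luf_pages_ok = false };

	printf("eased  : no_shootdown=%d\n", no_shootdown_context(&eased));
	printf("pressed: no_shootdown=%d\n", no_shootdown_context(&pressed));
	return 0;
}

With enough non-luf pages, a preempt-disabled task context now skips
the shootdown (no_shootdown=1); under memory pressure it is still
allowed (no_shootdown=0).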
Signed-off-by: Byungchul Park <byungchul@sk.com>
---
 mm/compaction.c     |  6 +++---
 mm/internal.h       |  5 +++--
 mm/page_alloc.c     | 27 +++++++++++++++------------
 mm/page_isolation.c |  2 +-
 mm/page_reporting.c |  4 ++--
 5 files changed, 24 insertions(+), 20 deletions(-)

diff --git a/mm/compaction.c b/mm/compaction.c
index a7f17867decae..8fa9de6db2441 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -605,7 +605,7 @@ static unsigned long isolate_freepages_block(struct compact_control *cc,
 
 	page = pfn_to_page(blockpfn);
 
-	luf_takeoff_start();
+	luf_takeoff_start(cc->zone);
 	/* Isolate free pages. */
 	for (; blockpfn < end_pfn; blockpfn += stride, page += stride) {
 		int isolated;
@@ -1601,7 +1601,7 @@ static void fast_isolate_freepages(struct compact_control *cc)
 		if (!area->nr_free)
 			continue;
 
-		can_shootdown = luf_takeoff_start();
+		can_shootdown = luf_takeoff_start(cc->zone);
 		spin_lock_irqsave(&cc->zone->lock, flags);
 		freelist = &area->free_list[MIGRATE_MOVABLE];
 retry:
@@ -2413,7 +2413,7 @@ static enum compact_result compact_finished(struct compact_control *cc)
 	 * luf_takeoff_{start,end}() is required to identify whether
 	 * this compaction context is tlb shootdownable for luf'd pages.
 	 */
-	luf_takeoff_start();
+	luf_takeoff_start(cc->zone);
 	ret = __compact_finished(cc);
 	luf_takeoff_end(cc->zone);
 
diff --git a/mm/internal.h b/mm/internal.h
index e634eaf220f00..fba19c283ac48 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -1594,7 +1594,7 @@ static inline void accept_page(struct page *page)
 #endif /* CONFIG_UNACCEPTED_MEMORY */
 #if defined(CONFIG_ARCH_WANT_BATCHED_UNMAP_TLB_FLUSH)
 extern struct luf_batch luf_batch[];
-bool luf_takeoff_start(void);
+bool luf_takeoff_start(struct zone *zone);
 void luf_takeoff_end(struct zone *zone);
 bool luf_takeoff_no_shootdown(void);
 bool luf_takeoff_check(struct zone *zone, struct page *page);
@@ -1608,6 +1608,7 @@ static inline bool non_luf_pages_ok(struct zone *zone)
 
 	return nr_free - nr_luf_pages > min_wm;
 }
+
 unsigned short fold_unmap_luf(void);
 
 /*
@@ -1694,7 +1695,7 @@ static inline bool can_luf_vma(struct vm_area_struct *vma)
 	return true;
 }
 #else /* CONFIG_ARCH_WANT_BATCHED_UNMAP_TLB_FLUSH */
-static inline bool luf_takeoff_start(void) { return false; }
+static inline bool luf_takeoff_start(struct zone *zone) { return false; }
 static inline void luf_takeoff_end(struct zone *zone) {}
 static inline bool luf_takeoff_no_shootdown(void) { return true; }
 static inline bool luf_takeoff_check(struct zone *zone, struct page *page) { return true; }
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index b81931c6f2cfd..ccbe49b78190a 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -623,22 +623,25 @@ compaction_capture(struct capture_control *capc, struct page *page,
 #endif /* CONFIG_COMPACTION */
 
 #if defined(CONFIG_ARCH_WANT_BATCHED_UNMAP_TLB_FLUSH)
-static bool no_shootdown_context(void)
+static bool no_shootdown_context(struct zone *zone)
 {
 	/*
-	 * If it performs with irq disabled, that might cause a deadlock.
-	 * Avoid tlb shootdown in this case.
+	 * Tries to avoid tlb shootdown if !preemptible(). However, it
+	 * should be allowed under heavy memory pressure.
 	 */
+	if (zone && non_luf_pages_ok(zone))
+		return !(preemptible() && in_task());
+
 	return !(!irqs_disabled() && in_task());
 }
 
 /*
  * Can be called with zone lock released and irq enabled.
  */
-bool luf_takeoff_start(void)
+bool luf_takeoff_start(struct zone *zone)
 {
 	unsigned long flags;
-	bool no_shootdown = no_shootdown_context();
+	bool no_shootdown = no_shootdown_context(zone);
 
 	local_irq_save(flags);
 
@@ -2588,7 +2591,7 @@ static bool unreserve_highatomic_pageblock(const struct alloc_context *ac,
 	 * luf_takeoff_{start,end}() is required for
 	 * get_page_from_free_area() to use luf_takeoff_check().
 	 */
-	luf_takeoff_start();
+	luf_takeoff_start(zone);
 	spin_lock_irqsave(&zone->lock, flags);
 	for (order = 0; order < NR_PAGE_ORDERS; order++) {
 		struct free_area *area = &(zone->free_area[order]);
@@ -2829,7 +2832,7 @@ static int rmqueue_bulk(struct zone *zone, unsigned int order,
 	unsigned long flags;
 	int i;
 
-	luf_takeoff_start();
+	luf_takeoff_start(zone);
 	spin_lock_irqsave(&zone->lock, flags);
 	for (i = 0; i < count; ++i) {
 		struct page *page = __rmqueue(zone, order, migratetype,
@@ -3455,7 +3458,7 @@ struct page *rmqueue_buddy(struct zone *preferred_zone, struct zone *zone,
 
 	do {
 		page = NULL;
-		luf_takeoff_start();
+		luf_takeoff_start(zone);
 		spin_lock_irqsave(&zone->lock, flags);
 		if (alloc_flags & ALLOC_HIGHATOMIC)
 			page = __rmqueue_smallest(zone, order, MIGRATE_HIGHATOMIC);
@@ -3600,7 +3603,7 @@ static struct page *rmqueue_pcplist(struct zone *preferred_zone,
 	struct page *page;
 	unsigned long __maybe_unused UP_flags;
 
-	luf_takeoff_start();
+	luf_takeoff_start(NULL);
 	/* spin_trylock may fail due to a parallel drain or IRQ reentrancy. */
 	pcp_trylock_prepare(UP_flags);
 	pcp = pcp_spin_trylock(zone->per_cpu_pageset);
@@ -5229,7 +5232,7 @@ unsigned long alloc_pages_bulk_noprof(gfp_t gfp, int preferred_nid,
 		if (unlikely(!zone))
 			goto failed;
 
-	luf_takeoff_start();
+	luf_takeoff_start(NULL);
 	/* spin_trylock may fail due to a parallel drain or IRQ reentrancy. */
 	pcp_trylock_prepare(UP_flags);
 	pcp = pcp_spin_trylock(zone->per_cpu_pageset);
@@ -7418,7 +7421,7 @@ unsigned long __offline_isolated_pages(unsigned long start_pfn,
 
 	offline_mem_sections(pfn, end_pfn);
 	zone = page_zone(pfn_to_page(pfn));
-	luf_takeoff_start();
+	luf_takeoff_start(zone);
 	spin_lock_irqsave(&zone->lock, flags);
 	while (pfn < end_pfn) {
 		page = pfn_to_page(pfn);
@@ -7536,7 +7539,7 @@ bool take_page_off_buddy(struct page *page)
 	unsigned int order;
 	bool ret = false;
 
-	luf_takeoff_start();
+	luf_takeoff_start(zone);
 	spin_lock_irqsave(&zone->lock, flags);
 	for (order = 0; order < NR_PAGE_ORDERS; order++) {
 		struct page *page_head = page - (pfn & ((1 << order) - 1));
diff --git a/mm/page_isolation.c b/mm/page_isolation.c
index eae33d188762b..ccd36838f9cff 100644
--- a/mm/page_isolation.c
+++ b/mm/page_isolation.c
@@ -211,7 +211,7 @@ static void unset_migratetype_isolate(struct page *page, int migratetype)
 	struct page *buddy;
 
 	zone = page_zone(page);
-	luf_takeoff_start();
+	luf_takeoff_start(zone);
 	spin_lock_irqsave(&zone->lock, flags);
 	if (!is_migrate_isolate_page(page))
 		goto out;
diff --git a/mm/page_reporting.c b/mm/page_reporting.c
index b23d3ed34ec07..83b66e7f0d257 100644
--- a/mm/page_reporting.c
+++ b/mm/page_reporting.c
@@ -170,7 +170,7 @@ page_reporting_cycle(struct page_reporting_dev_info *prdev, struct zone *zone,
 	if (free_area_empty(area, mt))
 		return err;
 
-	can_shootdown = luf_takeoff_start();
+	can_shootdown = luf_takeoff_start(zone);
 	spin_lock_irq(&zone->lock);
 
 	/*
@@ -250,7 +250,7 @@ page_reporting_cycle(struct page_reporting_dev_info *prdev, struct zone *zone,
 		/* update budget to reflect call to report function */
 		budget--;
 
-		luf_takeoff_start();
+		luf_takeoff_start(zone);
 		/* reacquire zone lock and resume processing */
 		spin_lock_irq(&zone->lock);
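
Also illustrative rather than part of the patch: the
non_luf_pages_ok() side of the condition, visible in the mm/internal.h
context above, compares the free pages not pending a luf flush against
the min watermark. A minimal userspace sketch with made-up numbers:

#include <stdbool.h>
#include <stdio.h>

/* Mirrors the check shown in the mm/internal.h hunk:
 * nr_free - nr_luf_pages > min_wm. All figures are illustrative. */
static bool non_luf_pages_ok(long nr_free, long nr_luf_pages, long min_wm)
{
	return nr_free - nr_luf_pages > min_wm;
}

int main(void)
{
	/* 1000 free pages, 200 luf'd, watermark 500: 800 > 500,
	 * so skipping the shootdown is affordable. */
	printf("%d\n", non_luf_pages_ok(1000, 200, 500));
	/* 600 free pages, 200 luf'd, watermark 500: 400 <= 500,
	 * memory pressure, the shootdown must stay allowed. */
	printf("%d\n", non_luf_pages_ok(600, 200, 500));
	return 0;
}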