From patchwork Tue Apr 16 09:31:21 2024
X-Patchwork-Submitter: "zhaoyang.huang"
X-Patchwork-Id: 13631568
From: "zhaoyang.huang"
Subject: [PATCH 1/1] mm: protect xa split stuff under lruvec->lru_lock during migration
Date: Tue, 16 Apr 2024 17:31:21 +0800
Message-ID: <20240416093121.313486-1-zhaoyang.huang@unisoc.com>
In-Reply-To: <95d6033195a781f81e6ad5bd46026aae@hycu.com>
References: <95d6033195a781f81e6ad5bd46026aae@hycu.com>
MIME-Version: 1.0
From: Zhaoyang Huang

The livelock in [1] has been reported multiple times since v5.15, where a zero-ref folio is repeatedly found in the page cache by find_get_entry. A possible timing sequence is proposed in [2]; briefly, the lockless xarray operation can be harmed by an illegal folio left in slot[offset]. This commit protects the xa split steps (folio_ref_freeze and __split_huge_page) under lruvec->lru_lock to close the race window.
[1]
[167789.800297] rcu: INFO: rcu_preempt detected stalls on CPUs/tasks:
[167726.780305] rcu: Tasks blocked on level-0 rcu_node (CPUs 0-7): P155
[167726.780319] (detected by 3, t=17256977 jiffies, g=19883597, q=2397394)
[167726.780325] task:kswapd0 state:R running task stack: 24 pid: 155 ppid: 2 flags:0x00000008
[167789.800308] rcu: Tasks blocked on level-0 rcu_node (CPUs 0-7): P155
[167789.800322] (detected by 3, t=17272732 jiffies, g=19883597, q=2397470)
[167789.800328] task:kswapd0 state:R running task stack: 24 pid: 155 ppid: 2 flags:0x00000008
[167789.800339] Call trace:
[167789.800342] dump_backtrace.cfi_jt+0x0/0x8
[167789.800355] show_stack+0x1c/0x2c
[167789.800363] sched_show_task+0x1ac/0x27c
[167789.800370] print_other_cpu_stall+0x314/0x4dc
[167789.800377] check_cpu_stall+0x1c4/0x36c
[167789.800382] rcu_sched_clock_irq+0xe8/0x388
[167789.800389] update_process_times+0xa0/0xe0
[167789.800396] tick_sched_timer+0x7c/0xd4
[167789.800404] __run_hrtimer+0xd8/0x30c
[167789.800408] hrtimer_interrupt+0x1e4/0x2d0
[167789.800414] arch_timer_handler_phys+0x5c/0xa0
[167789.800423] handle_percpu_devid_irq+0xbc/0x318
[167789.800430] handle_domain_irq+0x7c/0xf0
[167789.800437] gic_handle_irq+0x54/0x12c
[167789.800445] call_on_irq_stack+0x40/0x70
[167789.800451] do_interrupt_handler+0x44/0xa0
[167789.800457] el1_interrupt+0x34/0x64
[167789.800464] el1h_64_irq_handler+0x1c/0x2c
[167789.800470] el1h_64_irq+0x7c/0x80
[167789.800474] xas_find+0xb4/0x28c
[167789.800481] find_get_entry+0x3c/0x178
[167789.800487] find_lock_entries+0x98/0x2f8
[167789.800492] __invalidate_mapping_pages.llvm.3657204692649320853+0xc8/0x224
[167789.800500] invalidate_mapping_pages+0x18/0x28
[167789.800506] inode_lru_isolate+0x140/0x2a4
[167789.800512] __list_lru_walk_one+0xd8/0x204
[167789.800519] list_lru_walk_one+0x64/0x90
[167789.800524] prune_icache_sb+0x54/0xe0
[167789.800529] super_cache_scan+0x160/0x1ec
[167789.800535] do_shrink_slab+0x20c/0x5c0
[167789.800541] shrink_slab+0xf0/0x20c
[167789.800546] shrink_node_memcgs+0x98/0x320
[167789.800553] shrink_node+0xe8/0x45c
[167789.800557] balance_pgdat+0x464/0x814
[167789.800563] kswapd+0xfc/0x23c
[167789.800567] kthread+0x164/0x1c8
[167789.800573] ret_from_fork+0x10/0x20

[2]
Thread_isolate:
1. alloc_contig_range->isolate_migratepages_block isolates a number of pages
   to cc->migratepages via pfn (the folio has refcount 1 + n, from
   alloc_pages and the page cache)
2. alloc_contig_range->migrate_pages->folio_ref_freeze(folio, 1 + extra_pins)
   sets folio->refcnt to 0
3. alloc_contig_range->migrate_pages->xas_split splits the folio into each
   slot, from slot[offset] to slot[offset + sibs]
4. alloc_contig_range->migrate_pages->__split_huge_page->folio_lruvec_lock
   fails, which leaves the folio unable to have its refcnt set to 2
5. Thread_kswapd enters the livelock via the chain below:

       rcu_read_lock();
   retry:
       find_get_entry
           folio = xas_find
           if (!folio_try_get_rcu)
               xas_reset;
               goto retry;
       rcu_read_unlock();

5'. Thread_holdlock, as the holder of lruvec->lru_lock, could be stalled on
    the same core as Thread_kswapd.
Signed-off-by: Zhaoyang Huang
---
 mm/huge_memory.c | 19 ++++++++++++++-----
 1 file changed, 14 insertions(+), 5 deletions(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 9859aa4f7553..418e8d03480a 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2891,7 +2891,7 @@ static void __split_huge_page(struct page *page, struct list_head *list,
 {
 	struct folio *folio = page_folio(page);
 	struct page *head = &folio->page;
-	struct lruvec *lruvec;
+	struct lruvec *lruvec = folio_lruvec(folio);
 	struct address_space *swap_cache = NULL;
 	unsigned long offset = 0;
 	int i, nr_dropped = 0;
@@ -2908,8 +2908,6 @@ static void __split_huge_page(struct page *page, struct list_head *list,
 		xa_lock(&swap_cache->i_pages);
 	}
 
-	/* lock lru list/PageCompound, ref frozen by page_ref_freeze */
-	lruvec = folio_lruvec_lock(folio);
 
 	ClearPageHasHWPoisoned(head);
 
@@ -2942,7 +2940,6 @@ static void __split_huge_page(struct page *page, struct list_head *list,
 		folio_set_order(new_folio, new_order);
 	}
 
-	unlock_page_lruvec(lruvec);
 	/* Caller disabled irqs, so they are still disabled here */
 
 	split_page_owner(head, order, new_order);
@@ -2961,7 +2958,6 @@ static void __split_huge_page(struct page *page, struct list_head *list,
 		folio_ref_add(folio, 1 + new_nr);
 		xa_unlock(&folio->mapping->i_pages);
 	}
-	local_irq_enable();
 
 	if (nr_dropped)
 		shmem_uncharge(folio->mapping->host, nr_dropped);
@@ -3048,6 +3044,7 @@ int split_huge_page_to_list_to_order(struct page *page, struct list_head *list,
 	int extra_pins, ret;
 	pgoff_t end;
 	bool is_hzp;
+	struct lruvec *lruvec;
 
 	VM_BUG_ON_FOLIO(!folio_test_locked(folio), folio);
 	VM_BUG_ON_FOLIO(!folio_test_large(folio), folio);
@@ -3159,6 +3156,14 @@ int split_huge_page_to_list_to_order(struct page *page, struct list_head *list,
 	/* block interrupt reentry in xa_lock and spinlock */
 	local_irq_disable();
+
+	/*
+	 * Take the lruvec's lock before freezing the folio to prevent the
+	 * folio from remaining in the page cache with refcnt == 0, which
+	 * could lead find_get_entry into a livelock iterating the xarray.
+	 */
+	lruvec = folio_lruvec_lock(folio);
+
 	if (mapping) {
 		/*
 		 * Check if the folio is present in page cache.
@@ -3203,12 +3208,16 @@ int split_huge_page_to_list_to_order(struct page *page, struct list_head *list,
 		}
 
 		__split_huge_page(page, list, end, new_order);
+		unlock_page_lruvec(lruvec);
+		local_irq_enable();
 		ret = 0;
 	} else {
 		spin_unlock(&ds_queue->split_queue_lock);
fail:
 		if (mapping)
 			xas_unlock(&xas);
+
+		unlock_page_lruvec(lruvec);
 		local_irq_enable();
 		remap_page(folio, folio_nr_pages(folio));
 		ret = -EAGAIN;