From patchwork Sat Jul 15 06:31:14 2023
X-Patchwork-Submitter: Amir Goldstein
X-Patchwork-Id: 13314392
From: Amir Goldstein
To: Greg Kroah-Hartman
Cc: Sasha Levin, Leah Rumancik, Chandan Babu R, "Darrick J. Wong",
    linux-xfs@vger.kernel.org, linux-fsdevel@vger.kernel.org,
    stable@vger.kernel.org, Dave Chinner, Dave Chinner
Wong" , linux-xfs@vger.kernel.org, linux-fsdevel@vger.kernel.org, stable@vger.kernel.org, Dave Chinner , Dave Chinner Subject: [PATCH 6.1 4/4] xfs: fix xfs_inodegc_stop racing with mod_delayed_work Date: Sat, 15 Jul 2023 09:31:14 +0300 Message-Id: <20230715063114.1485841-5-amir73il@gmail.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20230715063114.1485841-1-amir73il@gmail.com> References: <20230715063114.1485841-1-amir73il@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-xfs@vger.kernel.org From: "Darrick J. Wong" commit 2254a7396a0ca6309854948ee1c0a33fa4268cec upstream. syzbot reported this warning from the faux inodegc shrinker that tries to kick off inodegc work: ------------[ cut here ]------------ WARNING: CPU: 1 PID: 102 at kernel/workqueue.c:1445 __queue_work+0xd44/0x1120 kernel/workqueue.c:1444 RIP: 0010:__queue_work+0xd44/0x1120 kernel/workqueue.c:1444 Call Trace: __queue_delayed_work+0x1c8/0x270 kernel/workqueue.c:1672 mod_delayed_work_on+0xe1/0x220 kernel/workqueue.c:1746 xfs_inodegc_shrinker_scan fs/xfs/xfs_icache.c:2212 [inline] xfs_inodegc_shrinker_scan+0x250/0x4f0 fs/xfs/xfs_icache.c:2191 do_shrink_slab+0x428/0xaa0 mm/vmscan.c:853 shrink_slab+0x175/0x660 mm/vmscan.c:1013 shrink_one+0x502/0x810 mm/vmscan.c:5343 shrink_many mm/vmscan.c:5394 [inline] lru_gen_shrink_node mm/vmscan.c:5511 [inline] shrink_node+0x2064/0x35f0 mm/vmscan.c:6459 kswapd_shrink_node mm/vmscan.c:7262 [inline] balance_pgdat+0xa02/0x1ac0 mm/vmscan.c:7452 kswapd+0x677/0xd60 mm/vmscan.c:7712 kthread+0x2e8/0x3a0 kernel/kthread.c:376 ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:308 This warning corresponds to this code in __queue_work: /* * For a draining wq, only works from the same workqueue are * allowed. The __WQ_DESTROYING helps to spot the issue that * queues a new work item to a wq after destroy_workqueue(wq). */ if (unlikely(wq->flags & (__WQ_DESTROYING | __WQ_DRAINING) && WARN_ON_ONCE(!is_chained_work(wq)))) return; For this to trip, we must have a thread draining the inodedgc workqueue and a second thread trying to queue inodegc work to that workqueue. This can happen if freezing or a ro remount race with reclaim poking our faux inodegc shrinker and another thread dropping an unlinked O_RDONLY file: Thread 0 Thread 1 Thread 2 xfs_inodegc_stop xfs_inodegc_shrinker_scan xfs_is_inodegc_enabled xfs_clear_inodegc_enabled xfs_inodegc_queue_all xfs_inodegc_queue xfs_is_inodegc_enabled drain_workqueue llist_empty mod_delayed_work_on(..., 0) __queue_work In other words, everything between the access to inodegc_enabled state and the decision to poke the inodegc workqueue requires some kind of coordination to avoid the WQ_DRAINING state. We could perhaps introduce a lock here, but we could also try to eliminate WQ_DRAINING from the picture. We could replace the drain_workqueue call with a loop that flushes the workqueue and queues workers as long as there is at least one inode present in the per-cpu inodegc llists. We've disabled inodegc at this point, so we know that the number of queued inodes will eventually hit zero as long as xfs_inodegc_start cannot reactivate the workers. There are four callers of xfs_inodegc_start. Three of them come from the VFS with s_umount held: filesystem thawing, failed filesystem freezing, and the rw remount transition. The fourth caller is mounting rw (no remount or freezing possible). There are three callers ofs xfs_inodegc_stop. One is unmounting (no remount or thaw possible). 
There are four callers of xfs_inodegc_start.  Three of them come from
the VFS with s_umount held: filesystem thawing, failed filesystem
freezing, and the rw remount transition.  The fourth caller is mounting
rw (no remount or freezing possible).

There are three callers of xfs_inodegc_stop.  One is unmounting (no
remount or thaw possible).  Two of them come from the VFS with s_umount
held: fs freezing and the ro remount transition.

Hence, it is correct to replace the drain_workqueue call with a loop
that drains the inodegc llists.

Fixes: 6191cf3ad59f ("xfs: flush inodegc workqueue tasks before cancel")
Signed-off-by: Darrick J. Wong
Reviewed-by: Dave Chinner
Signed-off-by: Dave Chinner
Signed-off-by: Amir Goldstein
Acked-by: Darrick J. Wong
---
 fs/xfs/xfs_icache.c | 32 +++++++++++++++++++++++++++-----
 1 file changed, 27 insertions(+), 5 deletions(-)

diff --git a/fs/xfs/xfs_icache.c b/fs/xfs/xfs_icache.c
index 7ce262dcabca..d884cba1d707 100644
--- a/fs/xfs/xfs_icache.c
+++ b/fs/xfs/xfs_icache.c
@@ -431,18 +431,23 @@ xfs_iget_check_free_state(
 }
 
 /* Make all pending inactivation work start immediately. */
-static void
+static bool
 xfs_inodegc_queue_all(
 	struct xfs_mount	*mp)
 {
 	struct xfs_inodegc	*gc;
 	int			cpu;
+	bool			ret = false;
 
 	for_each_online_cpu(cpu) {
 		gc = per_cpu_ptr(mp->m_inodegc, cpu);
-		if (!llist_empty(&gc->list))
+		if (!llist_empty(&gc->list)) {
 			mod_delayed_work_on(cpu, mp->m_inodegc_wq, &gc->work, 0);
+			ret = true;
+		}
 	}
+
+	return ret;
 }
 
 /*
@@ -1894,24 +1899,41 @@ xfs_inodegc_flush(
 
 /*
  * Flush all the pending work and then disable the inode inactivation background
- * workers and wait for them to stop.
+ * workers and wait for them to stop.  Caller must hold sb->s_umount to
+ * coordinate changes in the inodegc_enabled state.
  */
 void
 xfs_inodegc_stop(
 	struct xfs_mount	*mp)
 {
+	bool			rerun;
+
 	if (!xfs_clear_inodegc_enabled(mp))
 		return;
 
+	/*
+	 * Drain all pending inodegc work, including inodes that could be
+	 * queued by racing xfs_inodegc_queue or xfs_inodegc_shrinker_scan
+	 * threads that sample the inodegc state just prior to us clearing it.
+	 * The inodegc flag state prevents new threads from queuing more
+	 * inodes, so we queue pending work items and flush the workqueue until
+	 * all inodegc lists are empty.  IOWs, we cannot use drain_workqueue
+	 * here because it does not allow other unserialized mechanisms to
+	 * reschedule inodegc work while this draining is in progress.
+	 */
 	xfs_inodegc_queue_all(mp);
-	drain_workqueue(mp->m_inodegc_wq);
+	do {
+		flush_workqueue(mp->m_inodegc_wq);
+		rerun = xfs_inodegc_queue_all(mp);
+	} while (rerun);
 
 	trace_xfs_inodegc_stop(mp, __return_address);
 }
 
 /*
  * Enable the inode inactivation background workers and schedule deferred inode
- * inactivation work if there is any.
+ * inactivation work if there is any.  Caller must hold sb->s_umount to
+ * coordinate changes in the inodegc_enabled state.
  */
 void
 xfs_inodegc_start(