From patchwork Thu Jan 9 02:13:20 2025
X-Patchwork-Submitter: Long Li
X-Patchwork-Id: 13931805
From: Long Li
Subject: [PATCH v2] xfs: fix mount hang during primary superblock recovery failure
Date: Thu, 9 Jan 2025 10:13:20 +0800
Message-ID: <20250109021320.429625-1-leo.lilong@huawei.com>
X-Mailer: git-send-email 2.39.2
X-Mailing-List: linux-xfs@vger.kernel.org

When mounting an image containing a log with sb modifications that require
log replay, the mount process hangs forever with the following stack:

[root@localhost ~]# cat /proc/557/stack
[<0>] xfs_buftarg_wait+0x31/0x70
[<0>] xfs_buftarg_drain+0x54/0x350
[<0>] xfs_mountfs+0x66e/0xe80
[<0>] xfs_fs_fill_super+0x7f1/0xec0
[<0>] get_tree_bdev_flags+0x186/0x280
[<0>] get_tree_bdev+0x18/0x30
[<0>] xfs_fs_get_tree+0x1d/0x30
[<0>] vfs_get_tree+0x2d/0x110
[<0>] path_mount+0xb59/0xfc0
[<0>] do_mount+0x92/0xc0
[<0>] __x64_sys_mount+0xc2/0x160
[<0>] x64_sys_call+0x2de4/0x45c0
[<0>] do_syscall_64+0xa7/0x240
[<0>] entry_SYSCALL_64_after_hwframe+0x76/0x7e

During log recovery, while updating the in-memory superblock from the
primary SB buffer, if an error is encountered, such as superblock
corruption or some other reason, we proceed to out_release and release
the xfs_buf. However, this is not sufficient: the xfs_buf's log item has
already been initialized and the xfs_buf is still held by the buffer log
item, as shown below, so the xfs_buf will never be released and the mount
thread hangs.

xlog_recover_do_primary_sb_buffer
  xlog_recover_do_reg_buffer
    xlog_recover_validate_buf_type
      xfs_buf_item_init(bp, mp)
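To make the reference imbalance easier to see, here is a minimal,
self-contained userspace sketch (illustrative only, not XFS code; names
such as fake_buf, fake_buf_item_init and fake_buftarg_drained are invented
for this example): the buf log item takes its own hold on the buffer, so
dropping only the lookup hold on the old out_release path leaves one hold
behind and the drain loop never completes.

#include <stdio.h>

/*
 * Minimal userspace model of the hang (illustrative only, not kernel
 * code): mount cannot get past the buffer-target drain while anything
 * still holds a reference to the buffer.
 */
struct fake_buf {
        int b_hold;                     /* models bp->b_hold */
};

/* models xfs_buf_item_init(): the buf log item takes its own hold */
static void fake_buf_item_init(struct fake_buf *bp)
{
        bp->b_hold++;
}

/* models the old out_release error path: drops a single hold */
static void fake_buf_relse(struct fake_buf *bp)
{
        bp->b_hold--;
}

/* models xfs_buftarg_wait(): only completes once every hold is gone */
static int fake_buftarg_drained(const struct fake_buf *bp)
{
        return bp->b_hold == 0;
}

int main(void)
{
        struct fake_buf bp = { .b_hold = 1 };   /* hold from buffer lookup */

        fake_buf_item_init(&bp);        /* recovery attaches the buf log item */
        fake_buf_relse(&bp);            /* error path releases only one hold */

        if (!fake_buftarg_drained(&bp))
                printf("mount would hang: %d hold(s) remain\n", bp.b_hold);
        return 0;
}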
The solution is straightforward: simply let the buffer be handled by the
normal buffer write process. The filesystem will be shut down before the
buffer_list is submitted in xlog_do_recovery_pass(), ensuring that the
xfs_buf is released correctly, as follows:

xlog_do_recovery_pass
  error = xlog_recover_process
    xlog_recover_process_data
      xlog_recover_process_ophdr
        xlog_recovery_process_trans
        ...
          xlog_recover_buf_commit_pass2
            error = xlog_recover_do_primary_sb_buffer
                    //Encounter error and return
            if (error)
              goto out_writebuf
            ...
          out_writebuf:
            xfs_buf_delwri_queue(bp, buffer_list)   //add bp to list
            return error
  ...
  if (!list_empty(&buffer_list))
    if (error)
      xlog_force_shutdown(log, SHUTDOWN_LOG_IO_ERROR);  //shutdown first
    xfs_buf_delwri_submit(&buffer_list);                //write buffer in list
      __xfs_buf_submit
        if (bp->b_mount->m_log && xlog_is_shutdown(bp->b_mount->m_log))
          xfs_buf_ioend_fail(bp)                        //release bp correctly

Fixes: 6a18765b54e2 ("xfs: update the file system geometry after recoverying superblock buffers")
Signed-off-by: Long Li
Reviewed-by: "Darrick J. Wong"
Reviewed-by: Christoph Hellwig
---
v1->v2: Add code comments and add the fixed stack description to the
        commit message.

 fs/xfs/xfs_buf_item_recover.c | 11 ++++++++++-
 1 file changed, 10 insertions(+), 1 deletion(-)

diff --git a/fs/xfs/xfs_buf_item_recover.c b/fs/xfs/xfs_buf_item_recover.c
index 3d0c6402cb36..04122bbdd5f3 100644
--- a/fs/xfs/xfs_buf_item_recover.c
+++ b/fs/xfs/xfs_buf_item_recover.c
@@ -1079,7 +1079,7 @@ xlog_recover_buf_commit_pass2(
 		error = xlog_recover_do_primary_sb_buffer(mp, item, bp, buf_f,
 				current_lsn);
 		if (error)
-			goto out_release;
+			goto out_writebuf;
 
 		/* Update the rt superblock if we have one. */
 		if (xfs_has_rtsb(mp) && mp->m_rtsb_bp) {
@@ -1096,6 +1096,15 @@ xlog_recover_buf_commit_pass2(
 		xlog_recover_do_reg_buffer(mp, item, bp, buf_f,
 			current_lsn);
 	}
+	/*
+	 * Buffer held by buf log item during 'normal' buffer recovery must
+	 * be committed through buffer I/O submission path to ensure proper
+	 * release. When error occurs during do sb buffer recovery, log
+	 * shutdown will be done before submitting buffer list so that buffers
+	 * can be released correctly through ioend failure path.
+	 */
+out_writebuf:
+
 	/*
 	 * Perform delayed write on the buffer. Asynchronous writes will be
 	 * slower when taking into account all the buffers to be flushed.