From patchwork Wed Feb 8 17:52:19 2023
X-Patchwork-Submitter: Leah Rumancik
X-Patchwork-Id: 13133487
From: Leah Rumancik
To: linux-xfs@vger.kernel.org
Cc: amir73il@gmail.com, chandan.babu@oracle.com, Dave Chinner, "Darrick J. Wong", Allison Henderson, Dave Chinner, Leah Rumancik
Subject: [PATCH 5.15 CANDIDATE 01/10] xfs: zero inode fork buffer at allocation
Date: Wed, 8 Feb 2023 09:52:19 -0800
Message-Id: <20230208175228.2226263-2-leah.rumancik@gmail.com>
In-Reply-To: <20230208175228.2226263-1-leah.rumancik@gmail.com>
References: <20230208175228.2226263-1-leah.rumancik@gmail.com>

From: Dave Chinner

[ Upstream commit cb512c921639613ce03f87e62c5e93ed9fe8c84d ]

When we first allocate or resize an inline inode fork, we round up the
allocation to 4 byte alignment to meet journal alignment constraints.
We don't clear the unused bytes, so we can copy up to three
uninitialised bytes into the journal. Zero those bytes so we only ever
copy zeros into the journal.

Signed-off-by: Dave Chinner
Reviewed-by: Darrick J. Wong
Reviewed-by: Allison Henderson
Signed-off-by: Dave Chinner
Signed-off-by: Leah Rumancik
---
 fs/xfs/libxfs/xfs_inode_fork.c | 12 +++++++++---
 1 file changed, 9 insertions(+), 3 deletions(-)

diff --git a/fs/xfs/libxfs/xfs_inode_fork.c b/fs/xfs/libxfs/xfs_inode_fork.c
index 1d174909f9bd..20095233d7bc 100644
--- a/fs/xfs/libxfs/xfs_inode_fork.c
+++ b/fs/xfs/libxfs/xfs_inode_fork.c
@@ -50,8 +50,13 @@ xfs_init_local_fork(
 		mem_size++;
 
 	if (size) {
+		/*
+		 * As we round up the allocation here, we need to ensure the
+		 * bytes we don't copy data into are zeroed because the log
+		 * vectors still copy them into the journal.
+		 */
 		real_size = roundup(mem_size, 4);
-		ifp->if_u1.if_data = kmem_alloc(real_size, KM_NOFS);
+		ifp->if_u1.if_data = kmem_zalloc(real_size, KM_NOFS);
 		memcpy(ifp->if_u1.if_data, data, size);
 		if (zero_terminate)
 			ifp->if_u1.if_data[size] = '\0';
@@ -500,10 +505,11 @@ xfs_idata_realloc(
 	/*
 	 * For inline data, the underlying buffer must be a multiple of 4 bytes
 	 * in size so that it can be logged and stay on word boundaries.
-	 * We enforce that here.
+	 * We enforce that here, and use __GFP_ZERO to ensure that size
+	 * extensions always zero the unused roundup area.
+	 */
 	ifp->if_u1.if_data = krealloc(ifp->if_u1.if_data, roundup(new_size, 4),
-				      GFP_NOFS | __GFP_NOFAIL);
+				      GFP_NOFS | __GFP_NOFAIL | __GFP_ZERO);
 	ifp->if_bytes = new_size;
 }

From patchwork Wed Feb 8 17:52:20 2023
X-Patchwork-Submitter: Leah Rumancik
X-Patchwork-Id: 13133490
From: Leah Rumancik
To: linux-xfs@vger.kernel.org
Cc: amir73il@gmail.com, chandan.babu@oracle.com, Dave Chinner, "Darrick J. Wong", Allison Henderson, Dave Chinner, Leah Rumancik
Subject: [PATCH 5.15 CANDIDATE 02/10] xfs: fix potential log item leak
Date: Wed, 8 Feb 2023 09:52:20 -0800
Message-Id: <20230208175228.2226263-3-leah.rumancik@gmail.com>
In-Reply-To: <20230208175228.2226263-1-leah.rumancik@gmail.com>
References: <20230208175228.2226263-1-leah.rumancik@gmail.com>

From: Dave Chinner

[ Upstream commit c230a4a85bcdbfc1a7415deec6caf04e8fca1301 ]

Ever since we added shadow format buffers to the log items, log items
need to handle the item being released with shadow buffers attached.
Due to the fact this requirement was added at the same time we added
new rmap/reflink intents, we missed the cleanup of those items.

In theory, this means shadow buffers can be leaked in a very small
window when a shutdown is initiated. Testing with KASAN shows this
leak does not happen in practice - we haven't identified a single leak
in several years of shutdown testing since ~v4.8 kernels.

However, the intent whiteout cleanup mechanism results in every
cancelled intent being in exactly the same state as this tiny race
window creates, and so if intents don't clean up shadow buffers on
final release we will leak the shadow buffer for just about every
intent we create.

Hence we start with this patch to close this condition off and ensure
that when whiteouts start to be used we don't leak lots of memory.

Signed-off-by: Dave Chinner
Reviewed-by: Darrick J. Wong
Reviewed-by: Allison Henderson
Signed-off-by: Dave Chinner
Signed-off-by: Leah Rumancik
---
 fs/xfs/xfs_bmap_item.c     | 2 ++
 fs/xfs/xfs_icreate_item.c  | 1 +
 fs/xfs/xfs_refcount_item.c | 2 ++
 fs/xfs/xfs_rmap_item.c     | 2 ++
 4 files changed, 7 insertions(+)

diff --git a/fs/xfs/xfs_bmap_item.c b/fs/xfs/xfs_bmap_item.c
index 03159970133f..51ffdec5e4fa 100644
--- a/fs/xfs/xfs_bmap_item.c
+++ b/fs/xfs/xfs_bmap_item.c
@@ -39,6 +39,7 @@ STATIC void
 xfs_bui_item_free(
 	struct xfs_bui_log_item	*buip)
 {
+	kmem_free(buip->bui_item.li_lv_shadow);
 	kmem_cache_free(xfs_bui_zone, buip);
 }
 
@@ -198,6 +199,7 @@ xfs_bud_item_release(
 	struct xfs_bud_log_item	*budp = BUD_ITEM(lip);
 
 	xfs_bui_release(budp->bud_buip);
+	kmem_free(budp->bud_item.li_lv_shadow);
 	kmem_cache_free(xfs_bud_zone, budp);
 }
 
diff --git a/fs/xfs/xfs_icreate_item.c b/fs/xfs/xfs_icreate_item.c
index 017904a34c02..c265ae20946d 100644
--- a/fs/xfs/xfs_icreate_item.c
+++ b/fs/xfs/xfs_icreate_item.c
@@ -63,6 +63,7 @@ STATIC void
 xfs_icreate_item_release(
 	struct xfs_log_item	*lip)
 {
+	kmem_free(ICR_ITEM(lip)->ic_item.li_lv_shadow);
 	kmem_cache_free(xfs_icreate_zone, ICR_ITEM(lip));
 }
 
diff --git a/fs/xfs/xfs_refcount_item.c b/fs/xfs/xfs_refcount_item.c
index 46904b793bd4..8ef842d17916 100644
--- a/fs/xfs/xfs_refcount_item.c
+++ b/fs/xfs/xfs_refcount_item.c
@@ -35,6 +35,7 @@ STATIC void
 xfs_cui_item_free(
 	struct xfs_cui_log_item	*cuip)
 {
+	kmem_free(cuip->cui_item.li_lv_shadow);
 	if (cuip->cui_format.cui_nextents > XFS_CUI_MAX_FAST_EXTENTS)
 		kmem_free(cuip);
 	else
@@ -204,6 +205,7 @@ xfs_cud_item_release(
 	struct xfs_cud_log_item	*cudp = CUD_ITEM(lip);
 
 	xfs_cui_release(cudp->cud_cuip);
+	kmem_free(cudp->cud_item.li_lv_shadow);
 	kmem_cache_free(xfs_cud_zone, cudp);
 }
 
diff --git a/fs/xfs/xfs_rmap_item.c b/fs/xfs/xfs_rmap_item.c
index 5f0695980467..15e7b01740a7 100644
--- a/fs/xfs/xfs_rmap_item.c
+++ b/fs/xfs/xfs_rmap_item.c
@@ -35,6 +35,7 @@ STATIC void
 xfs_rui_item_free(
 	struct xfs_rui_log_item	*ruip)
 {
+	kmem_free(ruip->rui_item.li_lv_shadow);
 	if (ruip->rui_format.rui_nextents > XFS_RUI_MAX_FAST_EXTENTS)
 		kmem_free(ruip);
 	else
@@ -227,6 +228,7 @@ xfs_rud_item_release(
 	struct xfs_rud_log_item	*rudp = RUD_ITEM(lip);
 
 	xfs_rui_release(rudp->rud_ruip);
+	kmem_free(rudp->rud_item.li_lv_shadow);
 	kmem_cache_free(xfs_rud_zone, rudp);
 }

From patchwork Wed Feb 8 17:52:21 2023
X-Patchwork-Submitter: Leah Rumancik
X-Patchwork-Id: 13133488
From: Leah Rumancik
To: linux-xfs@vger.kernel.org
Cc: amir73il@gmail.com, chandan.babu@oracle.com, Dave Chinner, Christoph Hellwig, "Darrick J. Wong", Dave Chinner, Leah Rumancik
Subject: [PATCH 5.15 CANDIDATE 03/10] xfs: detect self referencing btree sibling pointers
Date: Wed, 8 Feb 2023 09:52:21 -0800
Message-Id: <20230208175228.2226263-4-leah.rumancik@gmail.com>
In-Reply-To: <20230208175228.2226263-1-leah.rumancik@gmail.com>
References: <20230208175228.2226263-1-leah.rumancik@gmail.com>

From: Dave Chinner

[ Upstream commit dc04db2aa7c9307e740d6d0e173085301c173b1a ]

Detect self referencing btree sibling pointers to catch the obvious
graph cycle problem and hence potential endless looping.

Signed-off-by: Dave Chinner
Reviewed-by: Christoph Hellwig
Reviewed-by: Darrick J. Wong
Signed-off-by: Dave Chinner
Signed-off-by: Leah Rumancik
---
 fs/xfs/libxfs/xfs_btree.c | 140 ++++++++++++++++++++++++++++----------
 1 file changed, 105 insertions(+), 35 deletions(-)

diff --git a/fs/xfs/libxfs/xfs_btree.c b/fs/xfs/libxfs/xfs_btree.c
index 298395481713..5bec048343b0 100644
--- a/fs/xfs/libxfs/xfs_btree.c
+++ b/fs/xfs/libxfs/xfs_btree.c
@@ -51,6 +51,52 @@ xfs_btree_magic(
 	return magic;
 }
 
+static xfs_failaddr_t
+xfs_btree_check_lblock_siblings(
+	struct xfs_mount	*mp,
+	struct xfs_btree_cur	*cur,
+	int			level,
+	xfs_fsblock_t		fsb,
+	xfs_fsblock_t		sibling)
+{
+	if (sibling == NULLFSBLOCK)
+		return NULL;
+	if (sibling == fsb)
+		return __this_address;
+	if (level >= 0) {
+		if (!xfs_btree_check_lptr(cur, sibling, level + 1))
+			return __this_address;
+	} else {
+		if (!xfs_verify_fsbno(mp, sibling))
+			return __this_address;
+	}
+
+	return NULL;
+}
+
+static xfs_failaddr_t
+xfs_btree_check_sblock_siblings(
+	struct xfs_mount	*mp,
+	struct xfs_btree_cur	*cur,
+	int			level,
+	xfs_agnumber_t		agno,
+	xfs_agblock_t		agbno,
+	xfs_agblock_t		sibling)
+{
+	if (sibling == NULLAGBLOCK)
+		return NULL;
+	if (sibling == agbno)
+		return __this_address;
+	if (level >= 0) {
+		if (!xfs_btree_check_sptr(cur, sibling, level + 1))
+			return __this_address;
+	} else {
+		if (!xfs_verify_agbno(mp, agno, sibling))
+			return __this_address;
+	}
+	return NULL;
+}
+
 /*
  * Check a long btree block header.  Return the address of the failing check,
  * or NULL if everything is ok.
@@ -65,6 +111,8 @@ __xfs_btree_check_lblock(
 	struct xfs_mount	*mp = cur->bc_mp;
 	xfs_btnum_t		btnum = cur->bc_btnum;
 	int			crc = xfs_has_crc(mp);
+	xfs_failaddr_t		fa;
+	xfs_fsblock_t		fsb = NULLFSBLOCK;
 
 	if (crc) {
 		if (!uuid_equal(&block->bb_u.l.bb_uuid, &mp->m_sb.sb_meta_uuid))
@@ -83,16 +131,16 @@ __xfs_btree_check_lblock(
 	if (be16_to_cpu(block->bb_numrecs) >
 	    cur->bc_ops->get_maxrecs(cur, level))
 		return __this_address;
-	if (block->bb_u.l.bb_leftsib != cpu_to_be64(NULLFSBLOCK) &&
-	    !xfs_btree_check_lptr(cur, be64_to_cpu(block->bb_u.l.bb_leftsib),
-			level + 1))
-		return __this_address;
-	if (block->bb_u.l.bb_rightsib != cpu_to_be64(NULLFSBLOCK) &&
-	    !xfs_btree_check_lptr(cur, be64_to_cpu(block->bb_u.l.bb_rightsib),
-			level + 1))
-		return __this_address;
 
-	return NULL;
+	if (bp)
+		fsb = XFS_DADDR_TO_FSB(mp, xfs_buf_daddr(bp));
+
+	fa = xfs_btree_check_lblock_siblings(mp, cur, level, fsb,
+			be64_to_cpu(block->bb_u.l.bb_leftsib));
+	if (!fa)
+		fa = xfs_btree_check_lblock_siblings(mp, cur, level, fsb,
+				be64_to_cpu(block->bb_u.l.bb_rightsib));
+	return fa;
 }
 
 /* Check a long btree block header.
 */
@@ -130,6 +178,9 @@ __xfs_btree_check_sblock(
 	struct xfs_mount	*mp = cur->bc_mp;
 	xfs_btnum_t		btnum = cur->bc_btnum;
 	int			crc = xfs_has_crc(mp);
+	xfs_failaddr_t		fa;
+	xfs_agblock_t		agbno = NULLAGBLOCK;
+	xfs_agnumber_t		agno = NULLAGNUMBER;
 
 	if (crc) {
 		if (!uuid_equal(&block->bb_u.s.bb_uuid, &mp->m_sb.sb_meta_uuid))
@@ -146,16 +197,18 @@ __xfs_btree_check_sblock(
 	if (be16_to_cpu(block->bb_numrecs) >
 	    cur->bc_ops->get_maxrecs(cur, level))
 		return __this_address;
-	if (block->bb_u.s.bb_leftsib != cpu_to_be32(NULLAGBLOCK) &&
-	    !xfs_btree_check_sptr(cur, be32_to_cpu(block->bb_u.s.bb_leftsib),
-			level + 1))
-		return __this_address;
-	if (block->bb_u.s.bb_rightsib != cpu_to_be32(NULLAGBLOCK) &&
-	    !xfs_btree_check_sptr(cur, be32_to_cpu(block->bb_u.s.bb_rightsib),
-			level + 1))
-		return __this_address;
 
-	return NULL;
+	if (bp) {
+		agbno = xfs_daddr_to_agbno(mp, xfs_buf_daddr(bp));
+		agno = xfs_daddr_to_agno(mp, xfs_buf_daddr(bp));
+	}
+
+	fa = xfs_btree_check_sblock_siblings(mp, cur, level, agno, agbno,
+			be32_to_cpu(block->bb_u.s.bb_leftsib));
+	if (!fa)
+		fa = xfs_btree_check_sblock_siblings(mp, cur, level, agno,
+			agbno, be32_to_cpu(block->bb_u.s.bb_rightsib));
+	return fa;
 }
 
 /* Check a short btree block header. */
@@ -4265,6 +4318,21 @@ xfs_btree_visit_block(
 	if (xfs_btree_ptr_is_null(cur, &rptr))
 		return -ENOENT;
 
+	/*
+	 * We only visit blocks once in this walk, so we have to avoid the
+	 * internal xfs_btree_lookup_get_block() optimisation where it will
+	 * return the same block without checking if the right sibling points
+	 * back to us and creates a cyclic reference in the btree.
+	 */
+	if (cur->bc_flags & XFS_BTREE_LONG_PTRS) {
+		if (be64_to_cpu(rptr.l) == XFS_DADDR_TO_FSB(cur->bc_mp,
+							xfs_buf_daddr(bp)))
+			return -EFSCORRUPTED;
+	} else {
+		if (be32_to_cpu(rptr.s) == xfs_daddr_to_agbno(cur->bc_mp,
+							xfs_buf_daddr(bp)))
+			return -EFSCORRUPTED;
+	}
 
 	return xfs_btree_lookup_get_block(cur, level, &rptr, &block);
 }
 
@@ -4439,20 +4507,21 @@ xfs_btree_lblock_verify(
 {
 	struct xfs_mount	*mp = bp->b_mount;
 	struct xfs_btree_block	*block = XFS_BUF_TO_BLOCK(bp);
+	xfs_fsblock_t		fsb;
+	xfs_failaddr_t		fa;
 
 	/* numrecs verification */
 	if (be16_to_cpu(block->bb_numrecs) > max_recs)
 		return __this_address;
 
 	/* sibling pointer verification */
-	if (block->bb_u.l.bb_leftsib != cpu_to_be64(NULLFSBLOCK) &&
-	    !xfs_verify_fsbno(mp, be64_to_cpu(block->bb_u.l.bb_leftsib)))
-		return __this_address;
-	if (block->bb_u.l.bb_rightsib != cpu_to_be64(NULLFSBLOCK) &&
-	    !xfs_verify_fsbno(mp, be64_to_cpu(block->bb_u.l.bb_rightsib)))
-		return __this_address;
-
-	return NULL;
+	fsb = XFS_DADDR_TO_FSB(mp, xfs_buf_daddr(bp));
+	fa = xfs_btree_check_lblock_siblings(mp, NULL, -1, fsb,
+			be64_to_cpu(block->bb_u.l.bb_leftsib));
+	if (!fa)
+		fa = xfs_btree_check_lblock_siblings(mp, NULL, -1, fsb,
+				be64_to_cpu(block->bb_u.l.bb_rightsib));
+	return fa;
 }
 
 /**
@@ -4493,7 +4562,9 @@ xfs_btree_sblock_verify(
 {
 	struct xfs_mount	*mp = bp->b_mount;
 	struct xfs_btree_block	*block = XFS_BUF_TO_BLOCK(bp);
-	xfs_agblock_t		agno;
+	xfs_agnumber_t		agno;
+	xfs_agblock_t		agbno;
+	xfs_failaddr_t		fa;
 
 	/* numrecs verification */
 	if (be16_to_cpu(block->bb_numrecs) > max_recs)
@@ -4501,14 +4572,13 @@ xfs_btree_sblock_verify(
 
 	/* sibling pointer verification */
 	agno = xfs_daddr_to_agno(mp, xfs_buf_daddr(bp));
-	if (block->bb_u.s.bb_leftsib != cpu_to_be32(NULLAGBLOCK) &&
-	    !xfs_verify_agbno(mp, agno, be32_to_cpu(block->bb_u.s.bb_leftsib)))
-		return __this_address;
-	if (block->bb_u.s.bb_rightsib != cpu_to_be32(NULLAGBLOCK) &&
-	    !xfs_verify_agbno(mp, agno, be32_to_cpu(block->bb_u.s.bb_rightsib)))
-		return __this_address;
-
-	return NULL;
+	agbno = xfs_daddr_to_agbno(mp, xfs_buf_daddr(bp));
+	fa = xfs_btree_check_sblock_siblings(mp, NULL, -1, agno, agbno,
+			be32_to_cpu(block->bb_u.s.bb_leftsib));
+	if (!fa)
+		fa = xfs_btree_check_sblock_siblings(mp, NULL, -1, agno, agbno,
+			be32_to_cpu(block->bb_u.s.bb_rightsib));
+	return fa;
 }
 
 /*

From patchwork Wed Feb 8 17:52:22 2023
X-Patchwork-Submitter: Leah Rumancik
X-Patchwork-Id: 13133489
From: Leah Rumancik
To: linux-xfs@vger.kernel.org
Cc: amir73il@gmail.com, chandan.babu@oracle.com, Dave Chinner, Christoph Hellwig, "Darrick J. Wong", Dave Chinner, Leah Rumancik
Subject: [PATCH 5.15 CANDIDATE 04/10] xfs: set XFS_FEAT_NLINK correctly
Date: Wed, 8 Feb 2023 09:52:22 -0800
Message-Id: <20230208175228.2226263-5-leah.rumancik@gmail.com>
In-Reply-To: <20230208175228.2226263-1-leah.rumancik@gmail.com>
References: <20230208175228.2226263-1-leah.rumancik@gmail.com>

From: Dave Chinner

[ Upstream commit dd0d2f9755191690541b09e6385d0f8cd8bc9d8f ]

While xfs_has_nlink() is not used in the kernel, it is used in
userspace (e.g. by xfs_db), so we need to set the XFS_FEAT_NLINK flag
correctly in xfs_sb_version_to_features().

Signed-off-by: Dave Chinner
Reviewed-by: Christoph Hellwig
Reviewed-by: Darrick J. Wong
Signed-off-by: Dave Chinner
Signed-off-by: Leah Rumancik
---
 fs/xfs/libxfs/xfs_sb.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/fs/xfs/libxfs/xfs_sb.c b/fs/xfs/libxfs/xfs_sb.c
index e58349be78bd..72c05485c870 100644
--- a/fs/xfs/libxfs/xfs_sb.c
+++ b/fs/xfs/libxfs/xfs_sb.c
@@ -70,6 +70,8 @@ xfs_sb_version_to_features(
 	/* optional V4 features */
 	if (sbp->sb_rblocks > 0)
 		features |= XFS_FEAT_REALTIME;
+	if (sbp->sb_versionnum & XFS_SB_VERSION_NLINKBIT)
+		features |= XFS_FEAT_NLINK;
 	if (sbp->sb_versionnum & XFS_SB_VERSION_ATTRBIT)
 		features |= XFS_FEAT_ATTR;
 	if (sbp->sb_versionnum & XFS_SB_VERSION_QUOTABIT)

From patchwork Wed Feb 8 17:52:23 2023
X-Patchwork-Submitter: Leah Rumancik
X-Patchwork-Id: 13133491
From: Leah Rumancik
To: linux-xfs@vger.kernel.org
Cc: amir73il@gmail.com, chandan.babu@oracle.com, Dave Chinner, Christoph Hellwig, "Darrick J. Wong", Dave Chinner, Leah Rumancik
Subject: [PATCH 5.15 CANDIDATE 05/10] xfs: validate v5 feature fields
Date: Wed, 8 Feb 2023 09:52:23 -0800
Message-Id: <20230208175228.2226263-6-leah.rumancik@gmail.com>
In-Reply-To: <20230208175228.2226263-1-leah.rumancik@gmail.com>
References: <20230208175228.2226263-1-leah.rumancik@gmail.com>

From: Dave Chinner

[ Upstream commit f0f5f658065a5af09126ec892e4c383540a1c77f ]

We don't check anywhere that the v4 feature flags that v5 requires to
be set are actually set. Do this check when we see that the filesystem
is a v5 filesystem.

Signed-off-by: Dave Chinner
Reviewed-by: Christoph Hellwig
Reviewed-by: Darrick J. Wong
Signed-off-by: Dave Chinner
Signed-off-by: Leah Rumancik
---
 fs/xfs/libxfs/xfs_sb.c | 68 +++++++++++++++++++++++++++++++++++-------
 1 file changed, 58 insertions(+), 10 deletions(-)

diff --git a/fs/xfs/libxfs/xfs_sb.c b/fs/xfs/libxfs/xfs_sb.c
index 72c05485c870..04e2a57313fa 100644
--- a/fs/xfs/libxfs/xfs_sb.c
+++ b/fs/xfs/libxfs/xfs_sb.c
@@ -30,6 +30,47 @@
  * Physical superblock buffer manipulations. Shared with libxfs in userspace.
*/ +/* + * Check that all the V4 feature bits that the V5 filesystem format requires are + * correctly set. + */ +static bool +xfs_sb_validate_v5_features( + struct xfs_sb *sbp) +{ + /* We must not have any unknown V4 feature bits set */ + if (sbp->sb_versionnum & ~XFS_SB_VERSION_OKBITS) + return false; + + /* + * The CRC bit is considered an invalid V4 flag, so we have to add it + * manually to the OKBITS mask. + */ + if (sbp->sb_features2 & ~(XFS_SB_VERSION2_OKBITS | + XFS_SB_VERSION2_CRCBIT)) + return false; + + /* Now check all the required V4 feature flags are set. */ + +#define V5_VERS_FLAGS (XFS_SB_VERSION_NLINKBIT | \ + XFS_SB_VERSION_ALIGNBIT | \ + XFS_SB_VERSION_LOGV2BIT | \ + XFS_SB_VERSION_EXTFLGBIT | \ + XFS_SB_VERSION_DIRV2BIT | \ + XFS_SB_VERSION_MOREBITSBIT) + +#define V5_FEAT_FLAGS (XFS_SB_VERSION2_LAZYSBCOUNTBIT | \ + XFS_SB_VERSION2_ATTR2BIT | \ + XFS_SB_VERSION2_PROJID32BIT | \ + XFS_SB_VERSION2_CRCBIT) + + if ((sbp->sb_versionnum & V5_VERS_FLAGS) != V5_VERS_FLAGS) + return false; + if ((sbp->sb_features2 & V5_FEAT_FLAGS) != V5_FEAT_FLAGS) + return false; + return true; +} + /* * We support all XFS versions newer than a v4 superblock with V2 directories. */ @@ -37,9 +78,19 @@ bool xfs_sb_good_version( struct xfs_sb *sbp) { - /* all v5 filesystems are supported */ + /* + * All v5 filesystems are supported, but we must check that all the + * required v4 feature flags are enabled correctly as the code checks + * those flags and not for v5 support. 
+ */ if (xfs_sb_is_v5(sbp)) - return true; + return xfs_sb_validate_v5_features(sbp); + + /* We must not have any unknown v4 feature bits set */ + if ((sbp->sb_versionnum & ~XFS_SB_VERSION_OKBITS) || + ((sbp->sb_versionnum & XFS_SB_VERSION_MOREBITSBIT) && + (sbp->sb_features2 & ~XFS_SB_VERSION2_OKBITS))) + return false; /* versions prior to v4 are not supported */ if (XFS_SB_VERSION_NUM(sbp) < XFS_SB_VERSION_4) @@ -51,12 +102,6 @@ xfs_sb_good_version( if (!(sbp->sb_versionnum & XFS_SB_VERSION_EXTFLGBIT)) return false; - /* And must not have any unknown v4 feature bits set */ - if ((sbp->sb_versionnum & ~XFS_SB_VERSION_OKBITS) || - ((sbp->sb_versionnum & XFS_SB_VERSION_MOREBITSBIT) && - (sbp->sb_features2 & ~XFS_SB_VERSION2_OKBITS))) - return false; - /* It's a supported v4 filesystem */ return true; } @@ -264,12 +309,15 @@ xfs_validate_sb_common( bool has_dalign; if (!xfs_verify_magic(bp, dsb->sb_magicnum)) { - xfs_warn(mp, "bad magic number"); + xfs_warn(mp, +"Superblock has bad magic number 0x%x. 
Not an XFS filesystem?", + be32_to_cpu(dsb->sb_magicnum)); return -EWRONGFS; } if (!xfs_sb_good_version(sbp)) { - xfs_warn(mp, "bad version"); + xfs_warn(mp, +"Superblock has unknown features enabled or corrupted feature masks."); return -EWRONGFS; } From patchwork Wed Feb 8 17:52:24 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Leah Rumancik X-Patchwork-Id: 13133492
From: Leah Rumancik To: linux-xfs@vger.kernel.org Cc: amir73il@gmail.com, chandan.babu@oracle.com, Dave Chinner , kernel test robot , "Darrick J .
Wong" , Christoph Hellwig , Dave Chinner , Leah Rumancik Subject: [PATCH 5.15 CANDIDATE 06/10] xfs: avoid unnecessary runtime sibling pointer endian conversions Date: Wed, 8 Feb 2023 09:52:24 -0800 Message-Id: <20230208175228.2226263-7-leah.rumancik@gmail.com> X-Mailer: git-send-email 2.39.1.519.gcb327c4b5f-goog In-Reply-To: <20230208175228.2226263-1-leah.rumancik@gmail.com> References: <20230208175228.2226263-1-leah.rumancik@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-xfs@vger.kernel.org From: Dave Chinner [ Upstream commit 5672225e8f2a872a22b0cecedba7a6644af1fb84 ] Commit dc04db2aa7c9 has caused a small aim7 regression, showing a small increase in CPU usage in __xfs_btree_check_sblock() as a result of the extra checking. This is likely due to the endian conversion of the sibling pointers being unconditional instead of relying on the compiler to endian convert the NULL pointer at compile time and avoiding the runtime conversion for this common case. Rework the checks so that endian conversion of the sibling pointers is only done if they are not null as the original code did. .... and these need to be "inline" because the compiler completely fails to inline them automatically like it should be doing. $ size fs/xfs/libxfs/xfs_btree.o* text data bss dec hex filename 51874 240 0 52114 cb92 fs/xfs/libxfs/xfs_btree.o.orig 51562 240 0 51802 ca5a fs/xfs/libxfs/xfs_btree.o.inline Just when you think the tools have advanced sufficiently we don't have to care about stuff like this anymore, along comes a reminder that *our tools still suck*. Fixes: dc04db2aa7c9 ("xfs: detect self referencing btree sibling pointers") Reported-by: kernel test robot Signed-off-by: Dave Chinner Reviewed-by: Darrick J.
Wong Reviewed-by: Christoph Hellwig Signed-off-by: Dave Chinner Signed-off-by: Leah Rumancik --- fs/xfs/libxfs/xfs_btree.c | 47 +++++++++++++++++++++++++++------------ 1 file changed, 33 insertions(+), 14 deletions(-) diff --git a/fs/xfs/libxfs/xfs_btree.c b/fs/xfs/libxfs/xfs_btree.c index 5bec048343b0..b4b5bf4bfed7 100644 --- a/fs/xfs/libxfs/xfs_btree.c +++ b/fs/xfs/libxfs/xfs_btree.c @@ -51,16 +51,31 @@ xfs_btree_magic( return magic; } -static xfs_failaddr_t +/* + * These sibling pointer checks are optimised for null sibling pointers. This + * happens a lot, and we don't need to byte swap at runtime if the sibling + * pointer is NULL. + * + * These are explicitly marked at inline because the cost of calling them as + * functions instead of inlining them is about 36 bytes extra code per call site + * on x86-64. Yes, gcc-11 fails to inline them, and explicit inlining of these + * two sibling check functions reduces the compiled code size by over 300 + * bytes. + */ +static inline xfs_failaddr_t xfs_btree_check_lblock_siblings( struct xfs_mount *mp, struct xfs_btree_cur *cur, int level, xfs_fsblock_t fsb, - xfs_fsblock_t sibling) + __be64 dsibling) { - if (sibling == NULLFSBLOCK) + xfs_fsblock_t sibling; + + if (dsibling == cpu_to_be64(NULLFSBLOCK)) return NULL; + + sibling = be64_to_cpu(dsibling); if (sibling == fsb) return __this_address; if (level >= 0) { @@ -74,17 +89,21 @@ xfs_btree_check_lblock_siblings( return NULL; } -static xfs_failaddr_t +static inline xfs_failaddr_t xfs_btree_check_sblock_siblings( struct xfs_mount *mp, struct xfs_btree_cur *cur, int level, xfs_agnumber_t agno, xfs_agblock_t agbno, - xfs_agblock_t sibling) + __be32 dsibling) { - if (sibling == NULLAGBLOCK) + xfs_agblock_t sibling; + + if (dsibling == cpu_to_be32(NULLAGBLOCK)) return NULL; + + sibling = be32_to_cpu(dsibling); if (sibling == agbno) return __this_address; if (level >= 0) { @@ -136,10 +155,10 @@ __xfs_btree_check_lblock( fsb = XFS_DADDR_TO_FSB(mp, xfs_buf_daddr(bp)); fa = 
xfs_btree_check_lblock_siblings(mp, cur, level, fsb, - be64_to_cpu(block->bb_u.l.bb_leftsib)); + block->bb_u.l.bb_leftsib); if (!fa) fa = xfs_btree_check_lblock_siblings(mp, cur, level, fsb, - be64_to_cpu(block->bb_u.l.bb_rightsib)); + block->bb_u.l.bb_rightsib); return fa; } @@ -204,10 +223,10 @@ __xfs_btree_check_sblock( } fa = xfs_btree_check_sblock_siblings(mp, cur, level, agno, agbno, - be32_to_cpu(block->bb_u.s.bb_leftsib)); + block->bb_u.s.bb_leftsib); if (!fa) fa = xfs_btree_check_sblock_siblings(mp, cur, level, agno, - agbno, be32_to_cpu(block->bb_u.s.bb_rightsib)); + agbno, block->bb_u.s.bb_rightsib); return fa; } @@ -4517,10 +4536,10 @@ xfs_btree_lblock_verify( /* sibling pointer verification */ fsb = XFS_DADDR_TO_FSB(mp, xfs_buf_daddr(bp)); fa = xfs_btree_check_lblock_siblings(mp, NULL, -1, fsb, - be64_to_cpu(block->bb_u.l.bb_leftsib)); + block->bb_u.l.bb_leftsib); if (!fa) fa = xfs_btree_check_lblock_siblings(mp, NULL, -1, fsb, - be64_to_cpu(block->bb_u.l.bb_rightsib)); + block->bb_u.l.bb_rightsib); return fa; } @@ -4574,10 +4593,10 @@ xfs_btree_sblock_verify( agno = xfs_daddr_to_agno(mp, xfs_buf_daddr(bp)); agbno = xfs_daddr_to_agbno(mp, xfs_buf_daddr(bp)); fa = xfs_btree_check_sblock_siblings(mp, NULL, -1, agno, agbno, - be32_to_cpu(block->bb_u.s.bb_leftsib)); + block->bb_u.s.bb_leftsib); if (!fa) fa = xfs_btree_check_sblock_siblings(mp, NULL, -1, agno, agbno, - be32_to_cpu(block->bb_u.s.bb_rightsib)); + block->bb_u.s.bb_rightsib); return fa; } From patchwork Wed Feb 8 17:52:25 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Leah Rumancik X-Patchwork-Id: 13133493
From: Leah Rumancik To: linux-xfs@vger.kernel.org Cc: amir73il@gmail.com, chandan.babu@oracle.com, Dave Chinner , "Darrick J . Wong" , Christoph Hellwig , Dave Chinner , Leah Rumancik Subject: [PATCH 5.15 CANDIDATE 07/10] xfs: don't assert fail on perag references on teardown Date: Wed, 8 Feb 2023 09:52:25 -0800 Message-Id: <20230208175228.2226263-8-leah.rumancik@gmail.com> X-Mailer: git-send-email 2.39.1.519.gcb327c4b5f-goog In-Reply-To: <20230208175228.2226263-1-leah.rumancik@gmail.com> References: <20230208175228.2226263-1-leah.rumancik@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-xfs@vger.kernel.org From: Dave Chinner [ Upstream commit 5b55cbc2d72632e874e50d2e36bce608e55aaaea ] Not fatal, the assert is there to catch developer attention. I'm seeing this occasionally during recoveryloop testing after a shutdown, and I don't want this to stop an overnight recoveryloop run as it is currently doing. Convert the ASSERT to an XFS_IS_CORRUPT() check so it will dump a corruption report into the log and cause a test failure that way, but it won't stop the machine dead. Signed-off-by: Dave Chinner Reviewed-by: Darrick J.
Wong Reviewed-by: Christoph Hellwig Signed-off-by: Dave Chinner Signed-off-by: Leah Rumancik --- fs/xfs/libxfs/xfs_ag.c | 3 +-- 1 file changed, 1 insertion(+), 2 deletions(-) diff --git a/fs/xfs/libxfs/xfs_ag.c b/fs/xfs/libxfs/xfs_ag.c index 005abfd9fd34..aff6fb5281f6 100644 --- a/fs/xfs/libxfs/xfs_ag.c +++ b/fs/xfs/libxfs/xfs_ag.c @@ -173,7 +173,6 @@ __xfs_free_perag( struct xfs_perag *pag = container_of(head, struct xfs_perag, rcu_head); ASSERT(!delayed_work_pending(&pag->pag_blockgc_work)); - ASSERT(atomic_read(&pag->pag_ref) == 0); kmem_free(pag); } @@ -192,7 +191,7 @@ xfs_free_perag( pag = radix_tree_delete(&mp->m_perag_tree, agno); spin_unlock(&mp->m_perag_lock); ASSERT(pag); - ASSERT(atomic_read(&pag->pag_ref) == 0); + XFS_IS_CORRUPT(pag->pag_mount, atomic_read(&pag->pag_ref) != 0); cancel_delayed_work_sync(&pag->pag_blockgc_work); xfs_iunlink_destroy(pag); From patchwork Wed Feb 8 17:52:26 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Leah Rumancik X-Patchwork-Id: 13133494
From: Leah Rumancik To: linux-xfs@vger.kernel.org Cc: amir73il@gmail.com, chandan.babu@oracle.com, Dave Chinner , "Darrick J . Wong" , Christoph Hellwig , Dave Chinner , Leah Rumancik Subject: [PATCH 5.15 CANDIDATE 08/10] xfs: assert in xfs_btree_del_cursor should take into account error Date: Wed, 8 Feb 2023 09:52:26 -0800 Message-Id: <20230208175228.2226263-9-leah.rumancik@gmail.com> X-Mailer: git-send-email 2.39.1.519.gcb327c4b5f-goog In-Reply-To: <20230208175228.2226263-1-leah.rumancik@gmail.com> References: <20230208175228.2226263-1-leah.rumancik@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-xfs@vger.kernel.org From: Dave Chinner [ Upstream commit 56486f307100e8fc66efa2ebd8a71941fa10bf6f ] xfs/538 on a 1kB block filesystem failed with this assert: XFS: Assertion failed: cur->bc_btnum != XFS_BTNUM_BMAP || cur->bc_ino.allocated == 0 || xfs_is_shutdown(cur->bc_mp), file: fs/xfs/libxfs/xfs_btree.c, line: 448 The problem was that an allocation failed unexpectedly in xfs_bmbt_alloc_block() after roughly 150,000 minlen allocation error injections, resulting in an EFSCORRUPTED error being returned to xfs_bmapi_write().
The error occurred on extent-to-btree format conversion allocating the new root block: RIP: 0010:xfs_bmbt_alloc_block+0x177/0x210 Call Trace: xfs_btree_new_iroot+0xdf/0x520 xfs_btree_make_block_unfull+0x10d/0x1c0 xfs_btree_insrec+0x364/0x790 xfs_btree_insert+0xaa/0x210 xfs_bmap_add_extent_hole_real+0x1fe/0x9a0 xfs_bmapi_allocate+0x34c/0x420 xfs_bmapi_write+0x53c/0x9c0 xfs_alloc_file_space+0xee/0x320 xfs_file_fallocate+0x36b/0x450 vfs_fallocate+0x148/0x340 __x64_sys_fallocate+0x3c/0x70 do_syscall_64+0x35/0x80 entry_SYSCALL_64_after_hwframe+0x44/0xa Why the allocation failed at this point is unknown, but it is likely that we ran the transaction out of reserved space and the filesystem out of space with bmbt blocks because of all the minlen allocations being done causing worst case fragmentation of a large allocation. Regardless of the cause, we've then called xfs_bmapi_finish() which calls xfs_btree_del_cursor(cur, error) to tear down the cursor. So we have a failed operation, error != 0, cur->bc_ino.allocated > 0 and the filesystem is still up. The assert fails to take into account that allocation can fail with an error and the transaction teardown will shut the filesystem down if necessary. i.e. the assert needs to check "|| error != 0" as well, because at this point shutdown is pending because the current transaction is dirty.... Signed-off-by: Dave Chinner Reviewed-by: Darrick J. Wong Reviewed-by: Christoph Hellwig Signed-off-by: Dave Chinner Signed-off-by: Leah Rumancik --- fs/xfs/libxfs/xfs_btree.c | 8 +++++++- 1 file changed, 7 insertions(+), 1 deletion(-) diff --git a/fs/xfs/libxfs/xfs_btree.c b/fs/xfs/libxfs/xfs_btree.c index b4b5bf4bfed7..482a4ccc6568 100644 --- a/fs/xfs/libxfs/xfs_btree.c +++ b/fs/xfs/libxfs/xfs_btree.c @@ -445,8 +445,14 @@ xfs_btree_del_cursor( break; } + /* + * If we are doing a BMBT update, the number of unaccounted blocks + allocated during this cursor life time should be zero.
If it's not + * zero, then we should be shut down or on our way to shutdown due to + * cancelling a dirty transaction on error. + */ ASSERT(cur->bc_btnum != XFS_BTNUM_BMAP || cur->bc_ino.allocated == 0 || - xfs_is_shutdown(cur->bc_mp)); + xfs_is_shutdown(cur->bc_mp) || error != 0); if (unlikely(cur->bc_flags & XFS_BTREE_STAGING)) kmem_free(cur->bc_ops); if (!(cur->bc_flags & XFS_BTREE_LONG_PTRS) && cur->bc_ag.pag) From patchwork Wed Feb 8 17:52:27 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Leah Rumancik X-Patchwork-Id: 13133495 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 84907C636CC for ; Wed, 8 Feb 2023 17:52:53 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230207AbjBHRww (ORCPT ); Wed, 8 Feb 2023 12:52:52 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:60138 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229509AbjBHRwv (ORCPT ); Wed, 8 Feb 2023 12:52:51 -0500 Received: from mail-pj1-x1033.google.com (mail-pj1-x1033.google.com [IPv6:2607:f8b0:4864:20::1033]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id B0D81768D for ; Wed, 8 Feb 2023 09:52:50 -0800 (PST) Received: by mail-pj1-x1033.google.com with SMTP id mi9so19128402pjb.4 for ; Wed, 08 Feb 2023 09:52:50 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20210112; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=rARtIHdQe5IIw3Wd0wifRnJtnfPDfUqMH4bk9/jyx9Q=; b=A5dkN+qlx4Ljo0KCbPHMyfg7Em8bph+LjB/V0H+B7+T+bFN/AmQ34RasDCdWjcS92B HEQvp3HB2oMfdSr+6t2UGp38L/w7ouPGO7eKDm80j7z+fjanj4bUkZssLVlOFM04k/t1 
From: Leah Rumancik To: linux-xfs@vger.kernel.org Cc: amir73il@gmail.com, chandan.babu@oracle.com, "Darrick J.
Wong" , Christoph Hellwig , Dave Chinner , Dave Chinner , Leah Rumancik Subject: [PATCH 5.15 CANDIDATE 09/10] xfs: purge dquots after inode walk fails during quotacheck Date: Wed, 8 Feb 2023 09:52:27 -0800 Message-Id: <20230208175228.2226263-10-leah.rumancik@gmail.com> X-Mailer: git-send-email 2.39.1.519.gcb327c4b5f-goog In-Reply-To: <20230208175228.2226263-1-leah.rumancik@gmail.com> References: <20230208175228.2226263-1-leah.rumancik@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-xfs@vger.kernel.org From: "Darrick J. Wong" [ Upstream commit 86d40f1e49e9a909d25c35ba01bea80dbcd758cb ] xfs/434 and xfs/436 have been reporting occasional memory leaks of xfs_dquot objects. These tests themselves were the messenger, not the culprit, since they unload the xfs module, which trips the slub debugging code while tearing down all the xfs slab caches: ============================================================================= BUG xfs_dquot (Tainted: G W ): Objects remaining in xfs_dquot on __kmem_cache_shutdown() ----------------------------------------------------------------------------- Slab 0xffffea000606de00 objects=30 used=5 fp=0xffff888181b78a78 flags=0x17ff80000010200(slab|head|node=0|zone=2|lastcpupid=0xfff) CPU: 0 PID: 3953166 Comm: modprobe Tainted: G W 5.18.0-rc6-djwx #rc6 d5824be9e46a2393677bda868f9b154d917ca6a7 Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS ?-20171121_152543-x86-ol7-builder-01.us.oracle.com-4.el7.1 04/01/2014 Since we don't generally rmmod the xfs module between fstests, this means that xfs/434 is really just the canary in the coal mine -- something leaked a dquot, but we don't know who. After days of pounding on fstests with kmemleak enabled, I finally got it to spit this out: unreferenced object 0xffff8880465654c0 (size 536): comm "u10:4", pid 88, jiffies 4294935810 (age 29.512s) hex dump (first 32 bytes): 60 4a 56 46 80 88 ff ff 58 ea e4 5c 80 88 ff ff `JVF....X..\.... 
00 e0 52 49 80 88 ff ff 01 00 01 00 00 00 00 00 ..RI............ backtrace: [] xfs_dquot_alloc+0x2c/0x530 [xfs] [] xfs_qm_dqread+0x6f/0x330 [xfs] [] xfs_qm_dqget+0x132/0x4e0 [xfs] [] xfs_qm_quotacheck_dqadjust+0xa0/0x3e0 [xfs] [] xfs_qm_dqusage_adjust+0x35d/0x4f0 [xfs] [] xfs_iwalk_ag_recs+0x348/0x5d0 [xfs] [] xfs_iwalk_run_callbacks+0x273/0x540 [xfs] [] xfs_iwalk_ag+0x5ed/0x890 [xfs] [] xfs_iwalk_ag_work+0xff/0x170 [xfs] [] xfs_pwork_work+0x79/0x130 [xfs] [] process_one_work+0x672/0x1040 [] worker_thread+0x59b/0xec0 [] kthread+0x29e/0x340 [] ret_from_fork+0x1f/0x30 Now we know that quotacheck is at fault, but even this report was canaryish -- it was triggered by xfs/494, which doesn't actually mount any filesystems. (kmemleak can be a little slow to notice leaks, even with fstests repeatedly whacking it to look for them.) Looking at the *previous* fstest, however, showed that the test run before xfs/494 was xfs/117. The tipoff to the problem is in this excerpt from dmesg: XFS (sda4): Quotacheck needed: Please wait. XFS (sda4): Metadata corruption detected at xfs_dinode_verify.part.0+0xdb/0x7b0 [xfs], inode 0x119 dinode XFS (sda4): Unmount and run xfs_repair XFS (sda4): First 128 bytes of corrupted metadata buffer: 00000000: 49 4e 81 a4 03 02 00 00 00 00 00 00 00 00 00 00 IN.............. 00000010: 00 00 00 01 00 00 00 00 00 90 57 54 54 1a 4c 68 ..........WTT.Lh 00000020: 81 f9 7d e1 6d ee 16 00 34 bd 7d e1 6d ee 16 00 ..}.m...4.}.m... 00000030: 34 bd 7d e1 6d ee 16 00 00 00 00 00 00 00 00 00 4.}.m........... 00000040: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................ 00000050: 00 00 00 02 00 00 00 00 00 00 00 00 96 80 f3 ab ................ 00000060: ff ff ff ff da 57 7b 11 00 00 00 00 00 00 00 03 .....W{......... 00000070: 00 00 00 01 00 00 00 10 00 00 00 00 00 00 00 08 ................ XFS (sda4): Quotacheck: Unsuccessful (Error -117): Disabling quotas. 
The dinode verifier decided that the inode was corrupt, which causes
iget to return with EFSCORRUPTED.  Since this happened during
quotacheck, it is obvious that the kernel aborted the inode walk on
account of the corruption error and disabled quotas.  Unfortunately, we
neglect to purge the dquot cache before doing that, which is how the
dquots leaked.

The problems started 10 years ago in commit b84a3a, when the dquot
lists were converted to a radix tree, but the error handling behavior
was not correctly preserved -- in that commit, if the bulkstat failed
and usrquota was enabled, the bulkstat failure code would be
overwritten by the result of flushing all the dquots to disk.  As long
as that succeeded, we'd continue the quota mount as if everything were
ok, but instead we're now operating with a corrupt inode and incorrect
quota usage counts.  I didn't notice this bug in 2019 when I wrote
commit ebd126a, which changed quotacheck to skip the dqflush when the
scan doesn't complete due to inode walk failures.

Introduced-by: b84a3a96751f ("xfs: remove the per-filesystem list of dquots")
Fixes: ebd126a651f8 ("xfs: convert quotacheck to use the new iwalk functions")
Signed-off-by: Darrick J. Wong
Reviewed-by: Christoph Hellwig
Reviewed-by: Dave Chinner
Signed-off-by: Dave Chinner
Signed-off-by: Leah Rumancik
---
 fs/xfs/xfs_qm.c | 9 ++++++++-
 1 file changed, 8 insertions(+), 1 deletion(-)

diff --git a/fs/xfs/xfs_qm.c b/fs/xfs/xfs_qm.c
index 5608066d6e53..623244650a2f 100644
--- a/fs/xfs/xfs_qm.c
+++ b/fs/xfs/xfs_qm.c
@@ -1317,8 +1317,15 @@ xfs_qm_quotacheck(
 	error = xfs_iwalk_threaded(mp, 0, 0, xfs_qm_dqusage_adjust, 0, true,
 			NULL);
-	if (error)
+	if (error) {
+		/*
+		 * The inode walk may have partially populated the dquot
+		 * caches.  We must purge them before disabling quota and
+		 * tearing down the quotainfo, or else the dquots will leak.
+		 */
+		xfs_qm_dqpurge_all(mp);
 		goto error_return;
+	}

 	/*
 	 * We've made all the changes that we need to make incore.  Flush them

From patchwork Wed Feb 8 17:52:28 2023
From: Leah Rumancik
To: linux-xfs@vger.kernel.org
Cc: amir73il@gmail.com, chandan.babu@oracle.com, "Darrick J. Wong",
 Christoph Hellwig, Dave Chinner, Dave Chinner, Leah Rumancik
Subject: [PATCH 5.15 CANDIDATE 10/10] xfs: don't leak btree cursor when
 insrec fails after a split
Date: Wed, 8 Feb 2023 09:52:28 -0800
Message-Id: <20230208175228.2226263-11-leah.rumancik@gmail.com>
In-Reply-To: <20230208175228.2226263-1-leah.rumancik@gmail.com>
References: <20230208175228.2226263-1-leah.rumancik@gmail.com>
Precedence: bulk
X-Mailing-List: linux-xfs@vger.kernel.org

From: "Darrick J. Wong"

[ Upstream commit a54f78def73d847cb060b18c4e4a3d1d26c9ca6d ]

The recent patch to improve btree cycle checking caused a regression
when I rebased the in-memory btree branch atop the 5.19 for-next
branch, because in-memory short-pointer btrees do not have AG numbers.
This produced the following complaint from kmemleak:

unreferenced object 0xffff88803d47dde8 (size 264):
  comm "xfs_io", pid 4889, jiffies 4294906764 (age 24.072s)
  hex dump (first 32 bytes):
    90 4d 0b 0f 80 88 ff ff 00 a0 bd 05 80 88 ff ff  .M..............
    e0 44 3a a0 ff ff ff ff 00 df 08 06 80 88 ff ff  .D:.............
  backtrace:
    [] xfbtree_dup_cursor+0x49/0xc0 [xfs]
    [] xfs_btree_dup_cursor+0x3b/0x200 [xfs]
    [] __xfs_btree_split+0x6ad/0x820 [xfs]
    [] xfs_btree_split+0x60/0x110 [xfs]
    [] xfs_btree_make_block_unfull+0x19a/0x1f0 [xfs]
    [] xfs_btree_insrec+0x3aa/0x810 [xfs]
    [] xfs_btree_insert+0xb3/0x240 [xfs]
    [] xfs_rmap_insert+0x99/0x200 [xfs]
    [] xfs_rmap_map_shared+0x192/0x5f0 [xfs]
    [] xfs_rmap_map_raw+0x6b/0x90 [xfs]
    [] xrep_rmap_stash+0xd5/0x1d0 [xfs]
    [] xrep_rmap_visit_bmbt+0xa0/0xf0 [xfs]
    [] xrep_rmap_scan_iext+0x56/0xa0 [xfs]
    [] xrep_rmap_scan_ifork+0xd8/0x160 [xfs]
    [] xrep_rmap_scan_inode+0x35/0x80 [xfs]
    [] xrep_rmap_find_rmaps+0x10e/0x270 [xfs]

I noticed that xfs_btree_insrec has a bunch of debug code that returns
out of the function immediately, without freeing the "new" btree cursor
that can be returned when _make_block_unfull calls xfs_btree_split.
Fix the error return in this function to free the btree cursor.

Signed-off-by: Darrick J. Wong
Reviewed-by: Christoph Hellwig
Reviewed-by: Dave Chinner
Signed-off-by: Dave Chinner
Signed-off-by: Leah Rumancik
---
 fs/xfs/libxfs/xfs_btree.c | 8 +++++---
 1 file changed, 5 insertions(+), 3 deletions(-)

diff --git a/fs/xfs/libxfs/xfs_btree.c b/fs/xfs/libxfs/xfs_btree.c
index 482a4ccc6568..dffe4ca58493 100644
--- a/fs/xfs/libxfs/xfs_btree.c
+++ b/fs/xfs/libxfs/xfs_btree.c
@@ -3266,7 +3266,7 @@ xfs_btree_insrec(
 	struct xfs_btree_block	*block;		/* btree block */
 	struct xfs_buf		*bp;		/* buffer for block */
 	union xfs_btree_ptr	nptr;		/* new block ptr */
-	struct xfs_btree_cur	*ncur;		/* new btree cursor */
+	struct xfs_btree_cur	*ncur = NULL;	/* new btree cursor */
 	union xfs_btree_key	nkey;		/* new block key */
 	union xfs_btree_key	*lkey;
 	int			optr;		/* old key/record index */
@@ -3346,7 +3346,7 @@ xfs_btree_insrec(
 #ifdef DEBUG
 	error = xfs_btree_check_block(cur, block, level, bp);
 	if (error)
-		return error;
+		goto error0;
 #endif

 	/*
@@ -3366,7 +3366,7 @@ xfs_btree_insrec(
 		for (i = numrecs - ptr; i >= 0; i--) {
 			error = xfs_btree_debug_check_ptr(cur, pp, i, level);
 			if (error)
-				return error;
+				goto error0;
 		}

 		xfs_btree_shift_keys(cur, kp, 1, numrecs - ptr + 1);
@@ -3451,6 +3451,8 @@ xfs_btree_insrec(
 	return 0;

 error0:
+	if (ncur)
+		xfs_btree_del_cursor(ncur, error);
 	return error;
 }