From patchwork Wed Dec 18 15:46:09 2024
X-Patchwork-Submitter: Kemeng Shi
X-Patchwork-Id: 13913113
From: Kemeng Shi
To: akpm@linux-foundation.org, willy@infradead.org
Cc: linux-kernel@vger.kernel.org, linux-fsdevel@vger.kernel.org,
    linux-mm@kvack.org, linux-nfs@vger.kernel.org
Subject: [PATCH v4 1/5] Xarray: Do not return sibling entries from xas_find_marked()
Date: Wed, 18 Dec 2024 23:46:09 +0800
Message-Id: <20241218154613.58754-2-shikemeng@huaweicloud.com>
In-Reply-To: <20241218154613.58754-1-shikemeng@huaweicloud.com>
References: <20241218154613.58754-1-shikemeng@huaweicloud.com>
Similar to the issue fixed in commit cbc02854331ed ("XArray: Do not return
sibling entries from xa_load()"), xas_find_marked() may return sibling
entries, as in the following race:

Thread A:                               Thread B:
                                        xa_store_range(xa, entry, 6, 7, gfp);
                                        xa_set_mark(xa, 6, mark)
XA_STATE(xas, xa, 6);
xas_find_marked(&xas, 7, mark);
  offset = xas_find_chunk(xas, advance, mark);
  [offset is 6, which points to a valid entry]
                                        xa_store_range(xa, entry, 4, 7, gfp);
  entry = xa_entry(xa, node, 6);
  [entry is a sibling of 4]
  if (!xa_is_node(entry))
          return entry;

Skip sibling entries, as xas_find() does, so that callers of
xas_find_marked() never see them; otherwise a caller may treat a sibling
entry as a valid entry and crash the kernel.

In addition, the load_race() test is modified to catch this issue; the
modified load_race() only passes once this fix is applied.

Here is an example of how this bug could in theory be triggered in NFS,
which enables large folios in the mapping. The racing paths involved are:

1. How pages are created and dirtied in NFS:

write
 ksys_write
  vfs_write
   new_sync_write
    nfs_file_write
     generic_perform_write
      nfs_write_begin
       fgf_set_order
       __filemap_get_folio
      nfs_write_end
       nfs_update_folio
        nfs_writepage_setup
         nfs_mark_request_dirty
          filemap_dirty_folio
           __folio_mark_dirty
            __xa_set_mark

2. How dirty pages are deleted in NFS:

ioctl
 do_vfs_ioctl
  file_ioctl
   ioctl_preallocate
    vfs_fallocate
     nfs42_fallocate
      nfs42_proc_deallocate
       truncate_pagecache_range
        truncate_inode_pages_range
         truncate_inode_folio
          filemap_remove_folio
           page_cache_delete
            xas_store(&xas, NULL);

3. How dirty pages are searched locklessly:

sync_file_range
 ksys_sync_file_range
  __filemap_fdatawrite_range
   filemap_fdatawrite_wbc
    do_writepages
     writeback_use_writepage
      writeback_iter
       writeback_get_folio
        filemap_get_folios_tag
         find_get_entry
          folio = xas_find_marked()
          folio_try_get(folio)

In theory, the kernel can crash as follows (the steps of the Create,
Search and Delete paths above interleave in time):

Create:  /* write pages 2,3 */
         write
          ...
           nfs_write_begin
            fgf_set_order
            __filemap_get_folio
             ...
              /* index = 2, order = 1 */
              xa_store(&xas, folio)
           nfs_write_end
            ...
             __folio_mark_dirty

Search:  /* sync pages 2 and 3 */
         sync_file_range
          ...
           find_get_entry
            folio = xas_find_marked()
            /* offset will be 2 */
            offset = xas_find_chunk()

Delete:  /* delete pages 2 and 3 */
         ioctl
          ...
           xas_store(&xas, NULL);

Create:  /* write pages 0-3 */
         write
          ...
           nfs_write_begin
            fgf_set_order
            __filemap_get_folio
             ...
              /* index = 0, order = 2 */
              xa_store(&xas, folio)
           nfs_write_end
            ...
             __folio_mark_dirty

Search:  /* get sibling entry from offset 2 */
         entry = xa_entry(.., 2)
         /* use sibling entry as folio and crash the kernel */
         folio_try_get(folio)
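
[Editor's note] To make the failure mode concrete, below is a minimal sketch
(not part of the patch) of the lockless lookup pattern the fix protects,
loosely modelled on find_get_entry(); the helper name lookup_marked() is
hypothetical and only illustrates how a caller consumes what
xas_find_marked() returns:

#include <linux/xarray.h>
#include <linux/rcupdate.h>

/*
 * Hypothetical helper: find the next marked entry under RCU.  Before
 * this patch, a concurrent xa_store_range() could make xas_find_marked()
 * hand back a sibling entry here, and a caller such as folio_try_get()
 * would then treat an internal value as a folio and crash.
 */
static void *lookup_marked(struct xarray *xa, unsigned long start,
			   xa_mark_t mark)
{
	XA_STATE(xas, xa, start);
	void *entry;

	rcu_read_lock();
	do {
		entry = xas_find_marked(&xas, ULONG_MAX, mark);
		/*
		 * xas_retry() restarts the walk on retry entries; with
		 * this fix, sibling entries are skipped inside
		 * xas_find_marked() itself.
		 */
	} while (xas_retry(&xas, entry));
	rcu_read_unlock();

	return entry;
}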
Signed-off-by: Kemeng Shi
---
 lib/xarray.c                          | 2 ++
 tools/testing/radix-tree/multiorder.c | 4 ++++
 2 files changed, 6 insertions(+)

diff --git a/lib/xarray.c b/lib/xarray.c
index 32d4bac8c94c..fa87949719a0 100644
--- a/lib/xarray.c
+++ b/lib/xarray.c
@@ -1382,6 +1382,8 @@ void *xas_find_marked(struct xa_state *xas, unsigned long max, xa_mark_t mark)
 		entry = xa_entry(xas->xa, xas->xa_node, xas->xa_offset);
 		if (!entry && !(xa_track_free(xas->xa) && mark == XA_FREE_MARK))
 			continue;
+		if (xa_is_sibling(entry))
+			continue;
 		if (!xa_is_node(entry))
 			return entry;
 		xas->xa_node = xa_to_node(entry);
diff --git a/tools/testing/radix-tree/multiorder.c b/tools/testing/radix-tree/multiorder.c
index cffaf2245d4f..eaff1b036989 100644
--- a/tools/testing/radix-tree/multiorder.c
+++ b/tools/testing/radix-tree/multiorder.c
@@ -227,6 +227,7 @@ static void *load_creator(void *ptr)
 			unsigned long index = (3 << RADIX_TREE_MAP_SHIFT) -
 						(1 << order);
 			item_insert_order(tree, index, order);
+			xa_set_mark(tree, index, XA_MARK_1);
 			item_delete_rcu(tree, index);
 		}
 	}
@@ -242,8 +243,11 @@ static void *load_worker(void *ptr)
 
 	rcu_register_thread();
 	while (!stop_iteration) {
+		unsigned long find_index = (2 << RADIX_TREE_MAP_SHIFT) + 1;
 		struct item *item = xa_load(ptr, index);
 		assert(!xa_is_internal(item));
+		item = xa_find(ptr, &find_index, index, XA_MARK_1);
+		assert(!xa_is_internal(item));
 	}
 	rcu_unregister_thread();
From patchwork Wed Dec 18 15:46:10 2024
X-Patchwork-Submitter: Kemeng Shi
X-Patchwork-Id: 13913116
From: Kemeng Shi
To: akpm@linux-foundation.org, willy@infradead.org
Cc: linux-kernel@vger.kernel.org, linux-fsdevel@vger.kernel.org,
    linux-mm@kvack.org, linux-nfs@vger.kernel.org
Subject: [PATCH v4 2/5] Xarray: move forward index correctly in xas_pause()
Date: Wed, 18 Dec 2024 23:46:10 +0800
Message-Id: <20241218154613.58754-3-shikemeng@huaweicloud.com>
In-Reply-To: <20241218154613.58754-1-shikemeng@huaweicloud.com>
References: <20241218154613.58754-1-shikemeng@huaweicloud.com>

After xas_load(), xas->index may point into the middle of the multi-index
entry that was found, so the bits of xas->index below node->shift can be
non-zero.  A subsequent xas_pause() then advances xas->index by
(offset - xas->xa_offset) << node->shift without masking those low bits,
and therefore skips some indices unexpectedly.

Consider the following case (assume XA_CHUNK_SHIFT is 4):

xa_store_range(xa, 16, 31, ...)
xa_store(xa, 32, ...)
XA_STATE(xas, xa, 17);
xas_for_each(&xas, ...)
	xas_load(&xas)
	/* xas->index = 17, xas->xa_offset = 1, xas->xa_node->shift = 4 */
	xas_pause(&xas)
	/* xas->index = 33, xas->xa_offset = 2, xas->xa_node->shift = 4 */

As we can see, index 32 is skipped unexpectedly.  Fix this by masking off
the bits below node->shift when moving the index forward in xas_pause().

For now this does not cause serious problems; at worst, minor issues such
as cachestat() reporting fewer page statuses than expected can occur.
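
[Editor's note] To make the arithmetic above concrete, here is a small
standalone sketch (illustration only, not kernel code) that plugs in the
numbers from the example; the variable names mirror the xa_state fields:

#include <stdio.h>

int main(void)
{
	unsigned long index = 17;	/* xas->xa_index after xas_load()    */
	unsigned int shift = 4;		/* node->shift (XA_CHUNK_SHIFT == 4) */
	unsigned int xa_offset = 1;	/* slot holding the 16..31 entry     */
	unsigned int offset = 2;	/* first slot past its siblings      */

	/* Before the fix: the low bits of the index are kept. */
	unsigned long before = index +
			((unsigned long)(offset - xa_offset) << shift);

	/* After the fix: bits below node->shift are masked off first. */
	unsigned long after = (index & (~0UL << shift)) +
			((unsigned long)(offset - xa_offset) << shift);

	printf("resume index before fix: %lu\n", before);	/* 33: skips 32 */
	printf("resume index after fix:  %lu\n", after);	/* 32 */
	return 0;
}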
Signed-off-by: Kemeng Shi
---
 lib/test_xarray.c | 35 +++++++++++++++++++++++++++++++++++
 lib/xarray.c      |  1 +
 2 files changed, 36 insertions(+)

diff --git a/lib/test_xarray.c b/lib/test_xarray.c
index d5c5cbba33ed..6932a26f4927 100644
--- a/lib/test_xarray.c
+++ b/lib/test_xarray.c
@@ -1448,6 +1448,41 @@ static noinline void check_pause(struct xarray *xa)
 	XA_BUG_ON(xa, count != order_limit);
 
 	xa_destroy(xa);
+
+	index = 0;
+	for (order = XA_CHUNK_SHIFT; order > 0; order--) {
+		XA_BUG_ON(xa, xa_store_order(xa, index, order,
+					xa_mk_index(index), GFP_KERNEL));
+		index += 1UL << order;
+	}
+
+	index = 0;
+	count = 0;
+	xas_set(&xas, 0);
+	rcu_read_lock();
+	xas_for_each(&xas, entry, ULONG_MAX) {
+		XA_BUG_ON(xa, entry != xa_mk_index(index));
+		index += 1UL << (XA_CHUNK_SHIFT - count);
+		count++;
+	}
+	rcu_read_unlock();
+	XA_BUG_ON(xa, count != XA_CHUNK_SHIFT);
+
+	index = 0;
+	count = 0;
+	xas_set(&xas, XA_CHUNK_SIZE / 2 + 1);
+	rcu_read_lock();
+	xas_for_each(&xas, entry, ULONG_MAX) {
+		XA_BUG_ON(xa, entry != xa_mk_index(index));
+		index += 1UL << (XA_CHUNK_SHIFT - count);
+		count++;
+		xas_pause(&xas);
+	}
+	rcu_read_unlock();
+	XA_BUG_ON(xa, count != XA_CHUNK_SHIFT);
+
+	xa_destroy(xa);
+
 }
 
 static noinline void check_move_tiny(struct xarray *xa)
diff --git a/lib/xarray.c b/lib/xarray.c
index fa87949719a0..d0732c5b8403 100644
--- a/lib/xarray.c
+++ b/lib/xarray.c
@@ -1147,6 +1147,7 @@ void xas_pause(struct xa_state *xas)
 			if (!xa_is_sibling(xa_entry(xas->xa, node, offset)))
 				break;
 		}
+		xas->xa_index &= ~0UL << node->shift;
 		xas->xa_index += (offset - xas->xa_offset) << node->shift;
 		if (xas->xa_index == 0)
 			xas->xa_node = XAS_BOUNDS;
From patchwork Wed Dec 18 15:46:11 2024
X-Patchwork-Submitter: Kemeng Shi
X-Patchwork-Id: 13913117
From: Kemeng Shi
To: akpm@linux-foundation.org, willy@infradead.org
Cc: linux-kernel@vger.kernel.org, linux-fsdevel@vger.kernel.org,
    linux-mm@kvack.org, linux-nfs@vger.kernel.org
Subject: [PATCH v4 3/5] Xarray: distinguish large entries correctly in xas_split_alloc()
Date: Wed, 18 Dec 2024 23:46:11 +0800
Message-Id: <20241218154613.58754-4-shikemeng@huaweicloud.com>
In-Reply-To: <20241218154613.58754-1-shikemeng@huaweicloud.com>
References: <20241218154613.58754-1-shikemeng@huaweicloud.com>

Splitting does not support large entries that would require expanding by
two more levels of xa_node.  In the case "xas->xa_shift + 2 *
XA_CHUNK_SHIFT == order", two extra levels of xa_node are also needed, so
classify that case as a large entry as well.

Since the maximum folio order in the page cache (MAX_PAGECACHE_ORDER) is
<= (XA_CHUNK_SHIFT * 2 - 1), this change is more of a cleanup than a bug
fix.
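
[Editor's note] As an illustration only (not part of the patch), the
standalone sketch below shows how the old and new checks treat the boundary
order, assuming XA_CHUNK_SHIFT == 6 and xas->xa_shift == 0; order 12 is
exactly the case that needs two extra levels of xa_node:

#include <stdio.h>

#define XA_CHUNK_SHIFT	6	/* typical value; an assumption here */

int main(void)
{
	unsigned int xa_shift = 0;	/* splitting down to order-0 entries */
	unsigned int order;

	for (order = 11; order <= 13; order++) {
		int old_reject = xa_shift + 2 * XA_CHUNK_SHIFT < order;
		int new_reject = xa_shift + 2 * XA_CHUNK_SHIFT <= order;

		/*
		 * order == 12 spans two full extra levels below xa_shift,
		 * which xas_split_alloc() cannot allocate; only the new
		 * "<=" check rejects it.
		 */
		printf("order %2u: old check %s, new check %s\n", order,
		       old_reject ? "rejects" : "accepts",
		       new_reject ? "rejects" : "accepts");
	}
	return 0;
}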
Signed-off-by: Kemeng Shi
---
 lib/xarray.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/lib/xarray.c b/lib/xarray.c
index d0732c5b8403..3fac3f2cea9d 100644
--- a/lib/xarray.c
+++ b/lib/xarray.c
@@ -1022,7 +1022,7 @@ void xas_split_alloc(struct xa_state *xas, void *entry, unsigned int order,
 	unsigned int mask = xas->xa_sibs;
 
 	/* XXX: no support for splitting really large entries yet */
-	if (WARN_ON(xas->xa_shift + 2 * XA_CHUNK_SHIFT < order))
+	if (WARN_ON(xas->xa_shift + 2 * XA_CHUNK_SHIFT <= order))
 		goto nomem;
 	if (xas->xa_shift + XA_CHUNK_SHIFT > order)
 		return;

From patchwork Wed Dec 18 15:46:12 2024
X-Patchwork-Submitter: Kemeng Shi
X-Patchwork-Id: 13913114
From: Kemeng Shi
To: akpm@linux-foundation.org, willy@infradead.org
Cc: linux-kernel@vger.kernel.org, linux-fsdevel@vger.kernel.org,
    linux-mm@kvack.org, linux-nfs@vger.kernel.org
Subject: [PATCH v4 4/5] Xarray: remove repeat check in xas_squash_marks()
Date: Wed, 18 Dec 2024 23:46:12 +0800
Message-Id: <20241218154613.58754-5-shikemeng@huaweicloud.com>
In-Reply-To: <20241218154613.58754-1-shikemeng@huaweicloud.com>
References: <20241218154613.58754-1-shikemeng@huaweicloud.com>
Callers of xas_squash_marks() have already ensured that xas->xa_sibs is
non-zero, so remove the redundant check of xas->xa_sibs inside
xas_squash_marks().

Signed-off-by: Kemeng Shi
---
 lib/xarray.c | 3 ---
 1 file changed, 3 deletions(-)

diff --git a/lib/xarray.c b/lib/xarray.c
index 3fac3f2cea9d..4231af284bd8 100644
--- a/lib/xarray.c
+++ b/lib/xarray.c
@@ -128,9 +128,6 @@ static void xas_squash_marks(const struct xa_state *xas)
 	unsigned int mark = 0;
 	unsigned int limit = xas->xa_offset + xas->xa_sibs + 1;
 
-	if (!xas->xa_sibs)
-		return;
-
 	do {
 		unsigned long *marks = xas->xa_node->marks[mark];
 		if (find_next_bit(marks, limit, xas->xa_offset + 1) == limit)
From patchwork Wed Dec 18 15:46:13 2024
X-Patchwork-Submitter: Kemeng Shi
X-Patchwork-Id: 13913115
From: Kemeng Shi
To: akpm@linux-foundation.org, willy@infradead.org
Cc: linux-kernel@vger.kernel.org, linux-fsdevel@vger.kernel.org,
    linux-mm@kvack.org, linux-nfs@vger.kernel.org
Subject: [PATCH v4 5/5] Xarray: use xa_mark_t in xas_squash_marks() to keep code consistent
Date: Wed, 18 Dec 2024 23:46:13 +0800
Message-Id: <20241218154613.58754-6-shikemeng@huaweicloud.com>
In-Reply-To: <20241218154613.58754-1-shikemeng@huaweicloud.com>
References: <20241218154613.58754-1-shikemeng@huaweicloud.com>

Apart from xas_squash_marks(), all functions use the xa_mark_t type to
iterate over all possible marks.  Use xa_mark_t in xas_squash_marks() as
well to keep the code consistent.

Signed-off-by: Kemeng Shi
---
 lib/xarray.c | 20 ++++++++++++--------
 1 file changed, 12 insertions(+), 8 deletions(-)

diff --git a/lib/xarray.c b/lib/xarray.c
index 4231af284bd8..a74795911f1c 100644
--- a/lib/xarray.c
+++ b/lib/xarray.c
@@ -125,16 +125,20 @@ static inline void node_mark_all(struct xa_node *node, xa_mark_t mark)
  */
 static void xas_squash_marks(const struct xa_state *xas)
 {
-	unsigned int mark = 0;
+	xa_mark_t mark = 0;
 	unsigned int limit = xas->xa_offset + xas->xa_sibs + 1;
 
-	do {
-		unsigned long *marks = xas->xa_node->marks[mark];
-		if (find_next_bit(marks, limit, xas->xa_offset + 1) == limit)
-			continue;
-		__set_bit(xas->xa_offset, marks);
-		bitmap_clear(marks, xas->xa_offset + 1, xas->xa_sibs);
-	} while (mark++ != (__force unsigned)XA_MARK_MAX);
+	for (;;) {
+		unsigned long *marks = node_marks(xas->xa_node, mark);
+
+		if (find_next_bit(marks, limit, xas->xa_offset + 1) != limit) {
+			__set_bit(xas->xa_offset, marks);
+			bitmap_clear(marks, xas->xa_offset + 1, xas->xa_sibs);
+		}
+		if (mark == XA_MARK_MAX)
+			break;
+		mark_inc(mark);
+	}
 }
 
 /* extracts the offset within this node from the index */
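
[Editor's note] For context, a standalone sketch (illustration only; the
types and limit below are stand-ins for the kernel's xa_mark_t, XA_MARK_MAX
and the mark_inc() helper in lib/xarray.c) of the iteration idiom this
patch adopts: a typed mark walked with for (;;) and an explicit
XA_MARK_MAX check, instead of a do/while that post-increments a plain
unsigned int through a __force cast:

#include <stdio.h>

typedef unsigned int mark_t;			/* stand-in for xa_mark_t   */
#define MARK_MAX	2			/* stand-in for XA_MARK_MAX */
#define mark_inc(mark)	((mark) = (mark_t)((unsigned int)(mark) + 1))

int main(void)
{
	mark_t mark = 0;

	/* Visit every mark, including MARK_MAX, exactly once. */
	for (;;) {
		printf("visit mark %u\n", (unsigned int)mark);
		if (mark == MARK_MAX)
			break;
		mark_inc(mark);
	}
	return 0;
}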