From patchwork Thu Oct 24 13:19:47 2024
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Yu Kuai
X-Patchwork-Id: 13848999
From: Yu Kuai
To: stable@vger.kernel.org, gregkh@linuxfoundation.org,
 harry.wentland@amd.com, sunpeng.li@amd.com, Rodrigo.Siqueira@amd.com,
 alexander.deucher@amd.com, christian.koenig@amd.com, Xinhui.Pan@amd.com,
 airlied@gmail.com, daniel@ffwll.ch, viro@zeniv.linux.org.uk,
 brauner@kernel.org, Liam.Howlett@oracle.com, akpm@linux-foundation.org,
 hughd@google.com, willy@infradead.org, sashal@kernel.org,
 srinivasan.shanmugam@amd.com, chiahsuan.chung@amd.com, mingo@kernel.org,
 mgorman@techsingularity.net, yukuai3@huawei.com, chengming.zhou@linux.dev,
 zhangpeng.00@bytedance.com, chuck.lever@oracle.com
Cc: amd-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org,
 linux-kernel@vger.kernel.org, linux-fsdevel@vger.kernel.org,
 maple-tree@lists.infradead.org, linux-mm@kvack.org,
 yukuai1@huaweicloud.com, yi.zhang@huawei.com, yangerkun@huawei.com
Subject: [PATCH 6.6 06/28] maple_tree: remove unnecessary default labels from
 switch statements
Date: Thu, 24 Oct 2024 21:19:47 +0800
Message-Id: <20241024132009.2267260-7-yukuai1@huaweicloud.com>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20241024132009.2267260-1-yukuai1@huaweicloud.com>
References: <20241024132009.2267260-1-yukuai1@huaweicloud.com>
From: "Liam R. Howlett"

commit 37a8ab24d3d4c465b070bd704e2ad2fa277df9d7 upstream.

Patch series "maple_tree: iterator state changes".

These patches have some general cleanup and a change to separate the
maple state status tracking from the maple state node.  The maple state
status change allows for walks to continue from previous places when
the status needs to be recorded to make logical sense for the next call
to the maple state.
For instance, it allows prev/next to function in a way that better
resembles the linked list.  It also allows switch statements to be used
to detect missed states at compile time, and the addition of the
fast-path "active" state is cleaner as an enum.

While making the status change, perf showed some very small (one line)
functions that were not inlined even with the inline keyword.  Making
these small functions __always_inline is less expensive according to
perf.  As part of that change, some inlines have been dropped from
larger functions.

Perf also showed that the commonly used mas_for_each() iterator was
spending a lot of time finding the end of the node.  This series
introduces caching of the end of the node in the maple state (and
updating it during writes).  This caching, along with the inline
changes, yielded a 23.25% improvement on the BENCH_MAS_FOR_EACH maple
tree test framework benchmark.

I've also included a change to mtree_range_walk and mtree_lookup_walk
to take advantage of Peng's change [1] to the initial pivot setup.

mmtests did not produce any significant gains.

[1] https://lore.kernel.org/all/20230711035444.526-1-zhangpeng.00@bytedance.com/T/#u

This patch (of 12):

Removing the default types from the switch statements will cause
compile warnings on missing cases.

Link: https://lkml.kernel.org/r/20231101171629.3612299-2-Liam.Howlett@oracle.com
Signed-off-by: Liam R. Howlett
Suggested-by: Andrew Morton
Signed-off-by: Andrew Morton
Signed-off-by: Yu Kuai
---
 lib/maple_tree.c | 9 ++-------
 1 file changed, 2 insertions(+), 7 deletions(-)

diff --git a/lib/maple_tree.c b/lib/maple_tree.c
index 97a610307d38..9de2e3dfdfcc 100644
--- a/lib/maple_tree.c
+++ b/lib/maple_tree.c
@@ -771,7 +771,6 @@ static inline void mte_set_pivot(struct maple_enode *mn, unsigned char piv,
 	BUG_ON(piv >= mt_pivots[type]);
 	switch (type) {
-	default:
 	case maple_range_64:
 	case maple_leaf_64:
 		node->mr64.pivot[piv] = val;
 		break;
@@ -795,7 +794,6 @@ static inline void mte_set_pivot(struct maple_enode *mn, unsigned char piv,
 static inline void __rcu **ma_slots(struct maple_node *mn, enum maple_type mt)
 {
 	switch (mt) {
-	default:
 	case maple_arange_64:
 		return mn->ma64.slot;
 	case maple_range_64:
@@ -804,6 +802,8 @@ static inline void __rcu **ma_slots(struct maple_node *mn, enum maple_type mt)
 	case maple_dense:
 		return mn->slot;
 	}
+
+	return NULL;
 }
 
 static inline bool mt_write_locked(const struct maple_tree *mt)
@@ -7013,7 +7013,6 @@ static void mt_dump_range(unsigned long min, unsigned long max,
 		else
 			pr_info("%.*s%lx-%lx: ", depth * 2, spaces, min, max);
 		break;
-	default:
 	case mt_dump_dec:
 		if (min == max)
 			pr_info("%.*s%lu: ", depth * 2, spaces, min);
@@ -7053,7 +7052,6 @@ static void mt_dump_range64(const struct maple_tree *mt, void *entry,
 		case mt_dump_hex:
 			pr_cont("%p %lX ", node->slot[i], node->pivot[i]);
 			break;
-		default:
 		case mt_dump_dec:
 			pr_cont("%p %lu ", node->slot[i], node->pivot[i]);
 		}
@@ -7083,7 +7081,6 @@ static void mt_dump_range64(const struct maple_tree *mt, void *entry,
 				pr_err("node %p last (%lx) > max (%lx) at pivot %d!\n",
 					node, last, max, i);
 				break;
-			default:
 			case mt_dump_dec:
 				pr_err("node %p last (%lu) > max (%lu) at pivot %d!\n",
 					node, last, max, i);
@@ -7108,7 +7105,6 @@ static void mt_dump_arange64(const struct maple_tree *mt, void *entry,
 		case mt_dump_hex:
 			pr_cont("%lx ", node->gap[i]);
 			break;
-		default:
 		case mt_dump_dec:
 			pr_cont("%lu ", node->gap[i]);
 		}
@@ -7119,7 +7115,6 @@ static void mt_dump_arange64(const struct maple_tree *mt, void *entry,
 		case mt_dump_hex:
 			pr_cont("%p %lX ", node->slot[i], node->pivot[i]);
 			break;
-		default:
 		case mt_dump_dec:
 			pr_cont("%p %lu ", node->slot[i], node->pivot[i]);
 		}