From patchwork Mon Nov 13 17:37:07 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Lobakin X-Patchwork-Id: 13454236 Received: from lindbergh.monkeyblade.net (lindbergh.monkeyblade.net [23.128.96.19]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 05FF4224D6; Mon, 13 Nov 2023 17:37:28 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b="ZK51rE6a" Received: from mgamail.intel.com (mgamail.intel.com [192.55.52.151]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id C2D51171C; Mon, 13 Nov 2023 09:37:27 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1699897047; x=1731433047; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=PEsWLDyeJovIwkzQRPz9VNsiJJHg4EVtH32TvYtHmOU=; b=ZK51rE6ac7ztB7qn6sWcdwEiRJaPYZVZ7qOLFpUYSv82XVo7IYlSU0kO gkoEBp8Cen1K3HqLMsN1ExEJCJzmzaO7q6s5rsyGOlqe6NHVwxP3OrVtK 9jQFvEltDcPvdfR33J6WqkRGFPuBCOdLrCs/gxB6oPch/uIP3yj02pQpU c4OVYDZk0WhFmqNhTOyc+sU1tq4GgpRtc4Qs5JrzbiKnNQopJkDENCyhg LSu0Z+0lspDO58dhOrbLluckHnjZ3kG3icfve82GF776w7+7Vu3WE85Sn E1wUXGYiD5aB8pORVoR6oELRN67lPkTaoGI06w1usVSHlAz9ibfGUCKxg w==; X-IronPort-AV: E=McAfee;i="6600,9927,10893"; a="370671507" X-IronPort-AV: E=Sophos;i="6.03,299,1694761200"; d="scan'208";a="370671507" Received: from fmsmga005.fm.intel.com ([10.253.24.32]) by fmsmga107.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 13 Nov 2023 09:37:27 -0800 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10893"; a="1095812628" X-IronPort-AV: E=Sophos;i="6.03,299,1694761200"; d="scan'208";a="1095812628" Received: from newjersey.igk.intel.com ([10.102.20.203]) by fmsmga005.fm.intel.com with ESMTP; 13 Nov 2023 09:37:24 -0800 From: Alexander Lobakin To: Yury Norov Cc: Alexander Lobakin , Andy Shevchenko , Rasmus Villemoes , Alexander Potapenko , Jakub Kicinski , Przemek Kitszel , netdev@vger.kernel.org, linux-btrfs@vger.kernel.org, dm-devel@redhat.com, ntfs3@lists.linux.dev, linux-s390@vger.kernel.org, linux-kernel@vger.kernel.org Subject: [PATCH v3 01/11] bitops: add missing prototype check Date: Mon, 13 Nov 2023 18:37:07 +0100 Message-ID: <20231113173717.927056-2-aleksander.lobakin@intel.com> X-Mailer: git-send-email 2.41.0 In-Reply-To: <20231113173717.927056-1-aleksander.lobakin@intel.com> References: <20231113173717.927056-1-aleksander.lobakin@intel.com> Precedence: bulk X-Mailing-List: linux-btrfs@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Commit 8238b4579866 ("wait_on_bit: add an acquire memory barrier") added a new bitop, test_bit_acquire(), with proper wrapping in order to try to optimize it at compile-time, but missed the list of bitops used for checking their prototypes a bit below. The functions added have consistent prototypes, so that no more changes are required and no functional changes take place. 
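For illustration, here is a minimal userspace sketch of the idea behind __check_bitop_pr(): fail the build as soon as two variants of one bitop diverge in their prototypes. The check_same_prototype() macro and the two function declarations below are invented for the sketch; this is not the kernel macro itself.

	#include <stdbool.h>

	/* hypothetical arch-specific and generic variants of one bitop */
	bool arch_test_bit_acquire(unsigned long nr, const volatile unsigned long *addr);
	bool generic_test_bit_acquire(unsigned long nr, const volatile unsigned long *addr);

	/* build fails if the two prototypes ever drift apart */
	#define check_same_prototype(a, b)					\
		_Static_assert(__builtin_types_compatible_p(__typeof__(a),	\
							     __typeof__(b)),	\
			       #a " and " #b " prototypes differ")

	check_same_prototype(arch_test_bit_acquire, generic_test_bit_acquire);

	int main(void)
	{
		return 0;
	}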
Fixes: 8238b4579866 ("wait_on_bit: add an acquire memory barrier") Reviewed-by: Przemek Kitszel Signed-off-by: Alexander Lobakin --- include/linux/bitops.h | 1 + 1 file changed, 1 insertion(+) diff --git a/include/linux/bitops.h b/include/linux/bitops.h index 2ba557e067fe..f7f5a783da2a 100644 --- a/include/linux/bitops.h +++ b/include/linux/bitops.h @@ -80,6 +80,7 @@ __check_bitop_pr(__test_and_set_bit); __check_bitop_pr(__test_and_clear_bit); __check_bitop_pr(__test_and_change_bit); __check_bitop_pr(test_bit); +__check_bitop_pr(test_bit_acquire); #undef __check_bitop_pr From patchwork Mon Nov 13 17:37:08 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Lobakin X-Patchwork-Id: 13454237 Received: from lindbergh.monkeyblade.net (lindbergh.monkeyblade.net [23.128.96.19]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 41987224EF; Mon, 13 Nov 2023 17:37:33 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b="IMmvDi4l" Received: from mgamail.intel.com (mgamail.intel.com [192.55.52.151]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 2EDC41729; Mon, 13 Nov 2023 09:37:31 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1699897051; x=1731433051; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=7ETz4iDardXwy+dtrWCXWun7Cu3NStxkY3YBTdJOejo=; b=IMmvDi4lKKU5s4ZXNZLYJaPTqSAOD0UfVhtJORu/M+daNEQtTJDHV8v/ z6dtPZAenKiZ52PefXatj5DJQUFOMK+FgwfX85p7vZyOuHlqWr91d+byB X16Bt+V4jjes9c9IA7RhwrG6sCeh361i1AgD1nImUisX9ZYK7FzGfaGyF URZUVXUBya+YuWvs6+R0ocIiRG64+0mGol2sAfch1mylRyJs5NPERACTp /I4SbvuZ14km4FxIsKs8XG5Yrd13gjpOF5DorkiLg0hvjtzIZHbRk0IYb NqXfpFAtZzVLh0Cc5PpYsRCRPmxAyPREY/juNtWA8UFiu/07G9ja5NiyA A==; X-IronPort-AV: E=McAfee;i="6600,9927,10893"; a="370671524" X-IronPort-AV: E=Sophos;i="6.03,299,1694761200"; d="scan'208";a="370671524" Received: from fmsmga005.fm.intel.com ([10.253.24.32]) by fmsmga107.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 13 Nov 2023 09:37:30 -0800 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10893"; a="1095812637" X-IronPort-AV: E=Sophos;i="6.03,299,1694761200"; d="scan'208";a="1095812637" Received: from newjersey.igk.intel.com ([10.102.20.203]) by fmsmga005.fm.intel.com with ESMTP; 13 Nov 2023 09:37:27 -0800 From: Alexander Lobakin To: Yury Norov Cc: Alexander Lobakin , Andy Shevchenko , Rasmus Villemoes , Alexander Potapenko , Jakub Kicinski , Przemek Kitszel , netdev@vger.kernel.org, linux-btrfs@vger.kernel.org, dm-devel@redhat.com, ntfs3@lists.linux.dev, linux-s390@vger.kernel.org, linux-kernel@vger.kernel.org Subject: [PATCH v3 02/11] bitops: make BYTES_TO_BITS() treewide-available Date: Mon, 13 Nov 2023 18:37:08 +0100 Message-ID: <20231113173717.927056-3-aleksander.lobakin@intel.com> X-Mailer: git-send-email 2.41.0 In-Reply-To: <20231113173717.927056-1-aleksander.lobakin@intel.com> References: <20231113173717.927056-1-aleksander.lobakin@intel.com> Precedence: bulk X-Mailing-List: linux-btrfs@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Avoid open-coding that simple expression each time by moving BYTES_TO_BITS() from the probes code to <linux/bitops.h> to export it to the rest of the kernel. Simplify the macro while at it.
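As a standalone illustration of the simplification (the constants and the *_OLD/*_NEW names below are local to the sketch, not the kernel definitions; a 64-bit long is assumed):

	#define BITS_PER_BYTE		8
	#define BITS_PER_LONG		64

	/* old, probes-local definition */
	#define BYTES_TO_BITS_OLD(nb)	((BITS_PER_LONG * (nb)) / sizeof(long))
	/* new, generic definition */
	#define BYTES_TO_BITS_NEW(nb)	((nb) * BITS_PER_BYTE)

	/* same value... */
	_Static_assert(BYTES_TO_BITS_OLD(7) == BYTES_TO_BITS_NEW(7),
		       "7 bytes are 56 bits either way");
	/* ...but the type of the expression narrows from (unsigned) long to int */
	_Static_assert(sizeof(BYTES_TO_BITS_OLD(7)) == sizeof(long) &&
		       sizeof(BYTES_TO_BITS_NEW(7)) == sizeof(int),
		       "hence the format literal adjustment in perf");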
`BITS_PER_LONG / sizeof(long)` always equals to %BITS_PER_BYTE, regardless of the target architecture. Do the same for the tools ecosystem as well (incl. its version of bitops.h). The previous implementation had its implicit type of long, while the new one is int, so adjust the format literal accordingly in the perf code. Suggested-by: Andy Shevchenko Reviewed-by: Przemek Kitszel Signed-off-by: Alexander Lobakin --- include/linux/bitops.h | 2 ++ kernel/trace/trace_probe.c | 2 -- tools/include/linux/bitops.h | 2 ++ tools/perf/util/probe-finder.c | 4 +--- 4 files changed, 5 insertions(+), 5 deletions(-) diff --git a/include/linux/bitops.h b/include/linux/bitops.h index f7f5a783da2a..e0cd09eb91cd 100644 --- a/include/linux/bitops.h +++ b/include/linux/bitops.h @@ -21,6 +21,8 @@ #define BITS_TO_U32(nr) __KERNEL_DIV_ROUND_UP(nr, BITS_PER_TYPE(u32)) #define BITS_TO_BYTES(nr) __KERNEL_DIV_ROUND_UP(nr, BITS_PER_TYPE(char)) +#define BYTES_TO_BITS(nb) ((nb) * BITS_PER_BYTE) + extern unsigned int __sw_hweight8(unsigned int w); extern unsigned int __sw_hweight16(unsigned int w); extern unsigned int __sw_hweight32(unsigned int w); diff --git a/kernel/trace/trace_probe.c b/kernel/trace/trace_probe.c index 4dc74d73fc1d..2b743c1e37db 100644 --- a/kernel/trace/trace_probe.c +++ b/kernel/trace/trace_probe.c @@ -1053,8 +1053,6 @@ parse_probe_arg(char *arg, const struct fetch_type *type, return ret; } -#define BYTES_TO_BITS(nb) ((BITS_PER_LONG * (nb)) / sizeof(long)) - /* Bitfield type needs to be parsed into a fetch function */ static int __parse_bitfield_probe_arg(const char *bf, const struct fetch_type *t, diff --git a/tools/include/linux/bitops.h b/tools/include/linux/bitops.h index f18683b95ea6..bc6600466e7b 100644 --- a/tools/include/linux/bitops.h +++ b/tools/include/linux/bitops.h @@ -20,6 +20,8 @@ #define BITS_TO_U32(nr) DIV_ROUND_UP(nr, BITS_PER_TYPE(u32)) #define BITS_TO_BYTES(nr) DIV_ROUND_UP(nr, BITS_PER_TYPE(char)) +#define BYTES_TO_BITS(nb) ((nb) * BITS_PER_BYTE) + extern unsigned int __sw_hweight8(unsigned int w); extern unsigned int __sw_hweight16(unsigned int w); extern unsigned int __sw_hweight32(unsigned int w); diff --git a/tools/perf/util/probe-finder.c b/tools/perf/util/probe-finder.c index f171360b0ef4..f967379ca42e 100644 --- a/tools/perf/util/probe-finder.c +++ b/tools/perf/util/probe-finder.c @@ -304,8 +304,6 @@ static int convert_variable_location(Dwarf_Die *vr_die, Dwarf_Addr addr, return ret2; } -#define BYTES_TO_BITS(nb) ((nb) * BITS_PER_LONG / sizeof(long)) - static int convert_variable_type(Dwarf_Die *vr_die, struct probe_trace_arg *tvar, const char *cast, bool user_access) @@ -335,7 +333,7 @@ static int convert_variable_type(Dwarf_Die *vr_die, total = dwarf_bytesize(vr_die); if (boffs < 0 || total < 0) return -ENOENT; - ret = snprintf(buf, 16, "b%d@%d/%zd", bsize, boffs, + ret = snprintf(buf, 16, "b%d@%d/%d", bsize, boffs, BYTES_TO_BITS(total)); goto formatted; } From patchwork Mon Nov 13 17:37:09 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Lobakin X-Patchwork-Id: 13454238 Received: from lindbergh.monkeyblade.net (lindbergh.monkeyblade.net [23.128.96.19]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id CA4CF225DA; Mon, 13 Nov 2023 17:37:35 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b="fNJ/diZv" 
Received: from mgamail.intel.com (mgamail.intel.com [192.55.52.151]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 339461986; Mon, 13 Nov 2023 09:37:34 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1699897054; x=1731433054; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=BTYwTzSLV6tmf650iMLS0KU62odkkMMGO5sugWGCxr8=; b=fNJ/diZvQWu0shGN6TrY1oO9f7E3pVQsTWHAufMti7ko16/lrT7buFNY rloqB7GGkiei+e3Pm0onUgwyqG9+/R5YVSnVvI3OthtETQEVcMP/mRBzW c9LK4NrF9xIclww7LlhQMNw4KV+o/dIqLzwJRfXaGKb+20/1cUOzA2u0s F7iYolZbMveS8zv2LmudzWzJdJdpGxJQHhed4yoeNKP6fuhCuPuF0lOW8 RIEI7LsKXgC/v9wk7GfP21XAXmUZjux5JRwbiaXiMx/piStjzAUG6krbr 3rC/pzVsMj2SkjCfEvgZN4Rm6yW0cEsPmVlglxQmgFZOsxDeprHwYbLKb Q==; X-IronPort-AV: E=McAfee;i="6600,9927,10893"; a="370671535" X-IronPort-AV: E=Sophos;i="6.03,299,1694761200"; d="scan'208";a="370671535" Received: from fmsmga005.fm.intel.com ([10.253.24.32]) by fmsmga107.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 13 Nov 2023 09:37:33 -0800 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10893"; a="1095812650" X-IronPort-AV: E=Sophos;i="6.03,299,1694761200"; d="scan'208";a="1095812650" Received: from newjersey.igk.intel.com ([10.102.20.203]) by fmsmga005.fm.intel.com with ESMTP; 13 Nov 2023 09:37:30 -0800 From: Alexander Lobakin To: Yury Norov Cc: Alexander Lobakin , Andy Shevchenko , Rasmus Villemoes , Alexander Potapenko , Jakub Kicinski , Przemek Kitszel , netdev@vger.kernel.org, linux-btrfs@vger.kernel.org, dm-devel@redhat.com, ntfs3@lists.linux.dev, linux-s390@vger.kernel.org, linux-kernel@vger.kernel.org Subject: [PATCH v3 03/11] bitops: let the compiler optimize {__,}assign_bit() Date: Mon, 13 Nov 2023 18:37:09 +0100 Message-ID: <20231113173717.927056-4-aleksander.lobakin@intel.com> X-Mailer: git-send-email 2.41.0 In-Reply-To: <20231113173717.927056-1-aleksander.lobakin@intel.com> References: <20231113173717.927056-1-aleksander.lobakin@intel.com> Precedence: bulk X-Mailing-List: linux-btrfs@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Since commit b03fc1173c0c ("bitops: let optimize out non-atomic bitops on compile-time constants"), the compilers are able to expand inline bitmap operations to compile-time initializers when possible. However, during the round of replacement if-__set-else-__clear with __assign_bit() as per Andy's advice, bloat-o-meter showed +1024 bytes difference in object code size for one module (even one function), where the pattern: DECLARE_BITMAP(foo) = { }; // on the stack, zeroed if (a) __set_bit(const_bit_num, foo); if (b) __set_bit(another_const_bit_num, foo); ... is heavily used, although there should be no difference: the bitmap is zeroed, so the second half of __assign_bit() should be compiled-out as a no-op. I either missed the fact that __assign_bit() has bitmap pointer marked as `volatile` (as we usually do for bitops) or was hoping that the compilers would at least try to look past the `volatile` for __always_inline functions. Anyhow, due to that attribute, the compilers were always compiling the whole expression and no mentioned compile-time optimizations were working. Convert __assign_bit() to a macro since it's a very simple if-else and all of the checks are performed inside __set_bit() and __clear_bit(), thus that wrapper has to be as transparent as possible. 
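To show the `volatile` effect in isolation, a plain userspace sketch (set_bit_plain()/set_bit_volatile() are made up for the example and are not the kernel bitops; compile with `gcc -O2 -S` and compare the two functions):

	/* the compiler is free to fold accesses done through a plain pointer */
	static inline void set_bit_plain(unsigned int nr, unsigned long *addr)
	{
		*addr |= 1UL << nr;
	}

	/* ...but must emit every access done through a volatile-qualified one */
	static inline void set_bit_volatile(unsigned int nr, volatile unsigned long *addr)
	{
		*addr |= 1UL << nr;
	}

	unsigned long folds_to_a_constant(void)
	{
		unsigned long map = 0;

		set_bit_plain(5, &map);
		set_bit_plain(6, &map);
		return map;	/* GCC/Clang compile this down to `return 0x60' */
	}

	unsigned long keeps_the_stores(void)
	{
		unsigned long map = 0;

		set_bit_volatile(5, &map);
		set_bit_volatile(6, &map);
		return map;	/* the read-modify-write sequence stays in the object code */
	}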
After that change, despite it showing only -20 bytes change for vmlinux (due to that it's still relatively unpopular), no drastic code size changes happen when replacing if-set-else-clear for onstack bitmaps with __assign_bit(), meaning the compiler now expands them to the actual operations will all the expected optimizations. Atomic assign_bit() is less affected due to its nature, but let's convert it to a macro as well to keep the code consistent and not leave a place for possible suboptimal codegen. Moreover, with certain kernel configuration it actually gives some saves (x86): do_ip_setsockopt 4154 4099 -55 Suggested-by: Yury Norov # assign_bit(), too Cc: Andy Shevchenko Reviewed-by: Przemek Kitszel Signed-off-by: Alexander Lobakin --- include/linux/bitops.h | 20 ++++---------------- 1 file changed, 4 insertions(+), 16 deletions(-) diff --git a/include/linux/bitops.h b/include/linux/bitops.h index e0cd09eb91cd..b25dc8742124 100644 --- a/include/linux/bitops.h +++ b/include/linux/bitops.h @@ -275,23 +275,11 @@ static inline unsigned long fns(unsigned long word, unsigned int n) * @addr: the address to start counting from * @value: the value to assign */ -static __always_inline void assign_bit(long nr, volatile unsigned long *addr, - bool value) -{ - if (value) - set_bit(nr, addr); - else - clear_bit(nr, addr); -} +#define assign_bit(nr, addr, value) \ + ((value) ? set_bit((nr), (addr)) : clear_bit((nr), (addr))) -static __always_inline void __assign_bit(long nr, volatile unsigned long *addr, - bool value) -{ - if (value) - __set_bit(nr, addr); - else - __clear_bit(nr, addr); -} +#define __assign_bit(nr, addr, value) \ + ((value) ? __set_bit((nr), (addr)) : __clear_bit((nr), (addr))) /** * __ptr_set_bit - Set bit in a pointer's value From patchwork Mon Nov 13 17:37:10 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Lobakin X-Patchwork-Id: 13454239 Received: from lindbergh.monkeyblade.net (lindbergh.monkeyblade.net [23.128.96.19]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 8E4D122EFB; Mon, 13 Nov 2023 17:37:38 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b="Xx09J7bU" Received: from mgamail.intel.com (mgamail.intel.com [192.55.52.151]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 2DFE9173A; Mon, 13 Nov 2023 09:37:37 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1699897057; x=1731433057; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=WC84IXj/Kc3tJW0HYnQfLmcK97LUidMkeT2CxDdJbZY=; b=Xx09J7bUtHewXrv6S6FedXzOW2w/cWkKNhokYYmyS+OmJ24SjKRE/V3L Il59ylYZnuQGLAOjE8FkmnwELamE7fxQBqZO9cAEMRv0NgNngpyOJynVM qU9h6nba3AYC+LN9+t+p/G2uxQOdn84pabShT+et/Zy/VYgpN7++5E32W P7lKrhLFPQVy5lmKZEx3bY084ItqtcLO3DWG4Yy9MXjhpizkA/3Zr2aqU pIrrQN0VbIsjZ8HVJgt5hv908RediSxDLCsPSwaiLgtMf5THBUN7kkpx1 Pq42bfbl46oeguW4KP1EUrMNOmOXUKD7kXtFAgHlC+Yz/5btvUB3fBNrD g==; X-IronPort-AV: E=McAfee;i="6600,9927,10893"; a="370671553" X-IronPort-AV: E=Sophos;i="6.03,299,1694761200"; d="scan'208";a="370671553" Received: from fmsmga005.fm.intel.com ([10.253.24.32]) by fmsmga107.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 13 Nov 2023 09:37:36 -0800 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10893"; a="1095812659" 
X-IronPort-AV: E=Sophos;i="6.03,299,1694761200"; d="scan'208";a="1095812659" Received: from newjersey.igk.intel.com ([10.102.20.203]) by fmsmga005.fm.intel.com with ESMTP; 13 Nov 2023 09:37:34 -0800 From: Alexander Lobakin To: Yury Norov Cc: Alexander Lobakin , Andy Shevchenko , Rasmus Villemoes , Alexander Potapenko , Jakub Kicinski , Przemek Kitszel , netdev@vger.kernel.org, linux-btrfs@vger.kernel.org, dm-devel@redhat.com, ntfs3@lists.linux.dev, linux-s390@vger.kernel.org, linux-kernel@vger.kernel.org Subject: [PATCH v3 04/11] linkmode: convert linkmode_{test,set,clear,mod}_bit() to macros Date: Mon, 13 Nov 2023 18:37:10 +0100 Message-ID: <20231113173717.927056-5-aleksander.lobakin@intel.com> X-Mailer: git-send-email 2.41.0 In-Reply-To: <20231113173717.927056-1-aleksander.lobakin@intel.com> References: <20231113173717.927056-1-aleksander.lobakin@intel.com> Precedence: bulk X-Mailing-List: linux-btrfs@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Since commit b03fc1173c0c ("bitops: let optimize out non-atomic bitops on compile-time constants"), the non-atomic bitops are macros which can be expanded by the compilers into compile-time expressions, which will result in better optimized object code. Unfortunately, turned out that passing `volatile` to those macros discards any possibility of optimization, as the compilers then don't even try to look whether the passed bitmap is known at compilation time. In addition to that, the mentioned linkmode helpers are marked with `inline`, not `__always_inline`, meaning that it's not guaranteed some compiler won't uninline them for no reason, which will also effectively prevent them from being optimized (it's a well-known thing the compilers sometimes uninline `2 + 2`). Convert linkmode_*_bit() from inlines to macros. Their calling convention are 1:1 with the corresponding bitops, so that it's not even needed to enumerate and map the arguments, only the names. No changes in vmlinux' object code (compiled by LLVM for x86_64) whatsoever, but that doesn't necessarily means the change is meaningless. 
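The aliasing technique itself, as a self-contained userspace sketch: all the *_flag names are invented, only the trick is the same. When the calling conventions match 1:1, a wrapper can alias the name instead of re-listing and forwarding the arguments:

	#include <stdio.h>

	/* underlying helpers with a fixed (nr, addr[, value]) convention */
	#define __set_flag(nr, addr)	(*(addr) |= 1UL << (nr))
	#define __clear_flag(nr, addr)	(*(addr) &= ~(1UL << (nr)))
	#define __assign_flag(nr, addr, set) \
		((set) ? __set_flag((nr), (addr)) : __clear_flag((nr), (addr)))

	/* 1:1 calling convention -> alias the names, no argument re-listing */
	#define linkmode_set_flag	__set_flag
	#define linkmode_clear_flag	__clear_flag
	#define linkmode_mod_flag	__assign_flag

	int main(void)
	{
		unsigned long modes = 0;

		linkmode_set_flag(3, &modes);
		linkmode_mod_flag(5, &modes, 1);
		linkmode_clear_flag(3, &modes);
		printf("modes = %#lx\n", modes);	/* modes = 0x20 */
		return 0;
	}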
Reviewed-by: Przemek Kitszel Acked-by: Jakub Kicinski Signed-off-by: Alexander Lobakin Reviewed-by: Andrew Lunn --- include/linux/linkmode.h | 27 ++++----------------------- 1 file changed, 4 insertions(+), 23 deletions(-) diff --git a/include/linux/linkmode.h b/include/linux/linkmode.h index 7303b4bc2ce0..f231e2edbfa5 100644 --- a/include/linux/linkmode.h +++ b/include/linux/linkmode.h @@ -38,29 +38,10 @@ static inline int linkmode_andnot(unsigned long *dst, const unsigned long *src1, return bitmap_andnot(dst, src1, src2, __ETHTOOL_LINK_MODE_MASK_NBITS); } -static inline void linkmode_set_bit(int nr, volatile unsigned long *addr) -{ - __set_bit(nr, addr); -} - -static inline void linkmode_clear_bit(int nr, volatile unsigned long *addr) -{ - __clear_bit(nr, addr); -} - -static inline void linkmode_mod_bit(int nr, volatile unsigned long *addr, - int set) -{ - if (set) - linkmode_set_bit(nr, addr); - else - linkmode_clear_bit(nr, addr); -} - -static inline int linkmode_test_bit(int nr, const volatile unsigned long *addr) -{ - return test_bit(nr, addr); -} +#define linkmode_test_bit test_bit +#define linkmode_set_bit __set_bit +#define linkmode_clear_bit __clear_bit +#define linkmode_mod_bit __assign_bit static inline void linkmode_set_bit_array(const int *array, int array_size, unsigned long *addr) From patchwork Mon Nov 13 17:37:11 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Lobakin X-Patchwork-Id: 13454240 Received: from lindbergh.monkeyblade.net (lindbergh.monkeyblade.net [23.128.96.19]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 6C9D022F0F; Mon, 13 Nov 2023 17:37:41 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b="m8Gg+Anl" Received: from mgamail.intel.com (mgamail.intel.com [192.55.52.151]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 030B71727; Mon, 13 Nov 2023 09:37:40 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1699897060; x=1731433060; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=Mw29f1QIt7U37r6p+V4Yyfpi9cV1+O/PtDoSzXpXBFY=; b=m8Gg+Anln4XxjOmcuK9xgdfYjKks8xW8B9k8uFXI0D2Ra045UEBnkGfZ HUnEOWvst4vy21zTnBnC44xC5sHR3fUycaEgbifUfwqH/Ayecqd5VLj6a UQ8ImBnkuEJdXwCoXalrMhy4cFs+USaXegFd7pj1bJpsOrAVw4xNHKf9+ E8O+gCGQ43RZD+Wxqr7jWN93jTV76zKwulso3nnAX2OdSovZ2NJvXFbMy 8oqplPaU1cUNXpJI4vbOlg/qtrUFmaLwqkueZzs8n2NNIAYX13fOGGpex SxEG6ltz5eGSm+KagYTU7jqYuwDSVAo7jdTnwhmLX8vK6GUfbDVlwf4wm A==; X-IronPort-AV: E=McAfee;i="6600,9927,10893"; a="370671566" X-IronPort-AV: E=Sophos;i="6.03,299,1694761200"; d="scan'208";a="370671566" Received: from fmsmga005.fm.intel.com ([10.253.24.32]) by fmsmga107.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 13 Nov 2023 09:37:39 -0800 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10893"; a="1095812670" X-IronPort-AV: E=Sophos;i="6.03,299,1694761200"; d="scan'208";a="1095812670" Received: from newjersey.igk.intel.com ([10.102.20.203]) by fmsmga005.fm.intel.com with ESMTP; 13 Nov 2023 09:37:36 -0800 From: Alexander Lobakin To: Yury Norov Cc: Alexander Lobakin , Andy Shevchenko , Rasmus Villemoes , Alexander Potapenko , Jakub Kicinski , Przemek Kitszel , netdev@vger.kernel.org, linux-btrfs@vger.kernel.org, dm-devel@redhat.com, 
ntfs3@lists.linux.dev, linux-s390@vger.kernel.org, linux-kernel@vger.kernel.org Subject: [PATCH v3 05/11] s390/cio: rename bitmap_size() -> idset_bitmap_size() Date: Mon, 13 Nov 2023 18:37:11 +0100 Message-ID: <20231113173717.927056-6-aleksander.lobakin@intel.com> X-Mailer: git-send-email 2.41.0 In-Reply-To: <20231113173717.927056-1-aleksander.lobakin@intel.com> References: <20231113173717.927056-1-aleksander.lobakin@intel.com> Precedence: bulk X-Mailing-List: linux-btrfs@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 bitmap_size() is a pretty generic name and one may want to use it for a generic bitmap API function. At the same time, its logic is not "generic", i.e. it's not just `nbits -> size of bitmap in bytes` converter as it would be expected from its name. Add the prefix 'idset_' used throughout the file where the function resides. Reviewed-by: Przemek Kitszel Signed-off-by: Alexander Lobakin --- idset_new() really wants its vmalloc() + memset() pair to be replaced with vzalloc(). --- drivers/s390/cio/idset.c | 10 ++++++---- 1 file changed, 6 insertions(+), 4 deletions(-) diff --git a/drivers/s390/cio/idset.c b/drivers/s390/cio/idset.c index 45f9c0736be4..0a1105a483bf 100644 --- a/drivers/s390/cio/idset.c +++ b/drivers/s390/cio/idset.c @@ -16,7 +16,7 @@ struct idset { unsigned long bitmap[]; }; -static inline unsigned long bitmap_size(int num_ssid, int num_id) +static inline unsigned long idset_bitmap_size(int num_ssid, int num_id) { return BITS_TO_LONGS(num_ssid * num_id) * sizeof(unsigned long); } @@ -25,11 +25,12 @@ static struct idset *idset_new(int num_ssid, int num_id) { struct idset *set; - set = vmalloc(sizeof(struct idset) + bitmap_size(num_ssid, num_id)); + set = vmalloc(sizeof(struct idset) + + idset_bitmap_size(num_ssid, num_id)); if (set) { set->num_ssid = num_ssid; set->num_id = num_id; - memset(set->bitmap, 0, bitmap_size(num_ssid, num_id)); + memset(set->bitmap, 0, idset_bitmap_size(num_ssid, num_id)); } return set; } @@ -41,7 +42,8 @@ void idset_free(struct idset *set) void idset_fill(struct idset *set) { - memset(set->bitmap, 0xff, bitmap_size(set->num_ssid, set->num_id)); + memset(set->bitmap, 0xff, + idset_bitmap_size(set->num_ssid, set->num_id)); } static inline void idset_add(struct idset *set, int ssid, int id) From patchwork Mon Nov 13 17:37:12 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Lobakin X-Patchwork-Id: 13454241 Received: from lindbergh.monkeyblade.net (lindbergh.monkeyblade.net [23.128.96.19]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 347BE23743; Mon, 13 Nov 2023 17:37:45 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b="NxeAFITB" Received: from mgamail.intel.com (mgamail.intel.com [192.55.52.151]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 1679819A7; Mon, 13 Nov 2023 09:37:43 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1699897063; x=1731433063; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=eF/TA+8XfUedFbqrzYuYUb78X0CRiVueaoHeJnxBGYU=; b=NxeAFITBG4Jvt+Iy28MRGpWfBFYZm3VCGSD4Y1GCF7v5uC+VzY9Dfq/E 0E9KAyLBe78x14OWLvDaAcnqt/ital6etz7s6WUQnu9/r7gyWg5RcIhkj 
yoS4BnLB5FCLN+BBlCNaDr8dOB7c+GLukji9Mvhw4MOCXfc+oakKrnmB4 HbtTUOxHX6WYKk/Im4cUg2/fQfb8EOfVuPuv/pdZi5TUBYIR0uxgNex/E xA8HmETtoicOQBQiR4Loxy0ef8Vbdzfdu6M+m1YsrUScvs4WQgGSBoGNW i98UCYPalWaozQjQvcXXR9n4tQxl8vOAsHJfSfkEuYv4RgWi+9lb76mWT A==; X-IronPort-AV: E=McAfee;i="6600,9927,10893"; a="370671586" X-IronPort-AV: E=Sophos;i="6.03,299,1694761200"; d="scan'208";a="370671586" Received: from fmsmga005.fm.intel.com ([10.253.24.32]) by fmsmga107.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 13 Nov 2023 09:37:42 -0800 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10893"; a="1095812682" X-IronPort-AV: E=Sophos;i="6.03,299,1694761200"; d="scan'208";a="1095812682" Received: from newjersey.igk.intel.com ([10.102.20.203]) by fmsmga005.fm.intel.com with ESMTP; 13 Nov 2023 09:37:39 -0800 From: Alexander Lobakin To: Yury Norov Cc: Alexander Lobakin , Andy Shevchenko , Rasmus Villemoes , Alexander Potapenko , Jakub Kicinski , Przemek Kitszel , netdev@vger.kernel.org, linux-btrfs@vger.kernel.org, dm-devel@redhat.com, ntfs3@lists.linux.dev, linux-s390@vger.kernel.org, linux-kernel@vger.kernel.org Subject: [PATCH v3 06/11] fs/ntfs3: add prefix to bitmap_size() and use BITS_TO_U64() Date: Mon, 13 Nov 2023 18:37:12 +0100 Message-ID: <20231113173717.927056-7-aleksander.lobakin@intel.com> X-Mailer: git-send-email 2.41.0 In-Reply-To: <20231113173717.927056-1-aleksander.lobakin@intel.com> References: <20231113173717.927056-1-aleksander.lobakin@intel.com> Precedence: bulk X-Mailing-List: linux-btrfs@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 bitmap_size() is a pretty generic name and one may want to use it for a generic bitmap API function. At the same time, its logic is NTFS-specific, as it aligns to the sizeof(u64), not the sizeof(long) (although it uses ideologically right ALIGN() instead of division). Add the prefix 'ntfs3_' used for that FS (not just 'ntfs_' to not mix it with the legacy module) and use generic BITS_TO_U64() while at it. Suggested-by: Yury Norov # BITS_TO_U64() Reviewed-by: Przemek Kitszel Signed-off-by: Alexander Lobakin --- fs/ntfs3/bitmap.c | 4 ++-- fs/ntfs3/fsntfs.c | 2 +- fs/ntfs3/index.c | 11 ++++++----- fs/ntfs3/ntfs_fs.h | 4 ++-- fs/ntfs3/super.c | 2 +- 5 files changed, 12 insertions(+), 11 deletions(-) diff --git a/fs/ntfs3/bitmap.c b/fs/ntfs3/bitmap.c index 63f14a0232f6..a19a73ed630b 100644 --- a/fs/ntfs3/bitmap.c +++ b/fs/ntfs3/bitmap.c @@ -654,7 +654,7 @@ int wnd_init(struct wnd_bitmap *wnd, struct super_block *sb, size_t nbits) wnd->total_zeroes = nbits; wnd->extent_max = MINUS_ONE_T; wnd->zone_bit = wnd->zone_end = 0; - wnd->nwnd = bytes_to_block(sb, bitmap_size(nbits)); + wnd->nwnd = bytes_to_block(sb, ntfs3_bitmap_size(nbits)); wnd->bits_last = nbits & (wbits - 1); if (!wnd->bits_last) wnd->bits_last = wbits; @@ -1347,7 +1347,7 @@ int wnd_extend(struct wnd_bitmap *wnd, size_t new_bits) return -EINVAL; /* Align to 8 byte boundary. */ - new_wnd = bytes_to_block(sb, bitmap_size(new_bits)); + new_wnd = bytes_to_block(sb, ntfs3_bitmap_size(new_bits)); new_last = new_bits & (wbits - 1); if (!new_last) new_last = wbits; diff --git a/fs/ntfs3/fsntfs.c b/fs/ntfs3/fsntfs.c index fbfe21dbb425..e18de9c4c2fa 100644 --- a/fs/ntfs3/fsntfs.c +++ b/fs/ntfs3/fsntfs.c @@ -522,7 +522,7 @@ static int ntfs_extend_mft(struct ntfs_sb_info *sbi) ni->mi.dirty = true; /* Step 2: Resize $MFT::BITMAP. 
*/ - new_bitmap_bytes = bitmap_size(new_mft_total); + new_bitmap_bytes = ntfs3_bitmap_size(new_mft_total); err = attr_set_size(ni, ATTR_BITMAP, NULL, 0, &sbi->mft.bitmap.run, new_bitmap_bytes, &new_bitmap_bytes, true, NULL); diff --git a/fs/ntfs3/index.c b/fs/ntfs3/index.c index cf92b2433f7a..e0cef8f4e414 100644 --- a/fs/ntfs3/index.c +++ b/fs/ntfs3/index.c @@ -1456,8 +1456,8 @@ static int indx_create_allocate(struct ntfs_index *indx, struct ntfs_inode *ni, alloc->nres.valid_size = alloc->nres.data_size = cpu_to_le64(data_size); - err = ni_insert_resident(ni, bitmap_size(1), ATTR_BITMAP, in->name, - in->name_len, &bitmap, NULL, NULL); + err = ni_insert_resident(ni, ntfs3_bitmap_size(1), ATTR_BITMAP, + in->name, in->name_len, &bitmap, NULL, NULL); if (err) goto out2; @@ -1518,8 +1518,9 @@ static int indx_add_allocate(struct ntfs_index *indx, struct ntfs_inode *ni, if (bmp) { /* Increase bitmap. */ err = attr_set_size(ni, ATTR_BITMAP, in->name, in->name_len, - &indx->bitmap_run, bitmap_size(bit + 1), - NULL, true, NULL); + &indx->bitmap_run, + ntfs3_bitmap_size(bit + 1), NULL, true, + NULL); if (err) goto out1; } @@ -2092,7 +2093,7 @@ static int indx_shrink(struct ntfs_index *indx, struct ntfs_inode *ni, if (in->name == I30_NAME) ni->vfs_inode.i_size = new_data; - bpb = bitmap_size(bit); + bpb = ntfs3_bitmap_size(bit); if (bpb * 8 == nbits) return 0; diff --git a/fs/ntfs3/ntfs_fs.h b/fs/ntfs3/ntfs_fs.h index f6706143d14b..16b84d605cd2 100644 --- a/fs/ntfs3/ntfs_fs.h +++ b/fs/ntfs3/ntfs_fs.h @@ -961,9 +961,9 @@ static inline bool run_is_empty(struct runs_tree *run) } /* NTFS uses quad aligned bitmaps. */ -static inline size_t bitmap_size(size_t bits) +static inline size_t ntfs3_bitmap_size(size_t bits) { - return ALIGN((bits + 7) >> 3, 8); + return BITS_TO_U64(bits) * sizeof(u64); } #define _100ns2seconds 10000000 diff --git a/fs/ntfs3/super.c b/fs/ntfs3/super.c index 9153dffde950..0248db1e5c01 100644 --- a/fs/ntfs3/super.c +++ b/fs/ntfs3/super.c @@ -1331,7 +1331,7 @@ static int ntfs_fill_super(struct super_block *sb, struct fs_context *fc) /* Check bitmap boundary. 
*/ tt = sbi->used.bitmap.nbits; - if (inode->i_size < bitmap_size(tt)) { + if (inode->i_size < ntfs3_bitmap_size(tt)) { ntfs_err(sb, "$Bitmap is corrupted."); err = -EINVAL; goto put_inode_out; From patchwork Mon Nov 13 17:37:13 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Lobakin X-Patchwork-Id: 13454242 Received: from lindbergh.monkeyblade.net (lindbergh.monkeyblade.net [23.128.96.19]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 4131123743; Mon, 13 Nov 2023 17:37:48 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b="bWFV7bmG" Received: from mgamail.intel.com (mgamail.intel.com [192.55.52.151]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 7EB7E19B1; Mon, 13 Nov 2023 09:37:46 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1699897066; x=1731433066; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=DtA/Z1IbwRhdE1Z9R1wqv+Q2gww15OYcKaj1wGIz64g=; b=bWFV7bmG9JKWOR3kJ0McR6JRmxGOGdAlTetpaSpqMZMZGrEcZ9Z8EInh W9dHTckVwQMZvIA2WwlJ9lReOHHNKB/xthjnyc4BdxxC+AG8KfFR8Hujb ZuWq0vJXhuBzjkNXbM6X6ZUGzM4PQ5it/KrvZlg2SsZm9Izig8LB6pc28 1BNhn8ILe3yGKol0ND3bwzhI4RrVPHsc1T/LU2P4fu3v8X0YFBTy53EsN BWvhTQNP6hEVfOslZmq8YHmLIoWyacVLv9QWl8kjHPZu1cemzYxIT4BS7 t1ewvpLTGbWqkwXCJyAq4vBLXm5FD4Hsiri/XytmoiSOXKxgiWhVliT3p Q==; X-IronPort-AV: E=McAfee;i="6600,9927,10893"; a="370671606" X-IronPort-AV: E=Sophos;i="6.03,299,1694761200"; d="scan'208";a="370671606" Received: from fmsmga005.fm.intel.com ([10.253.24.32]) by fmsmga107.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 13 Nov 2023 09:37:46 -0800 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10893"; a="1095812704" X-IronPort-AV: E=Sophos;i="6.03,299,1694761200"; d="scan'208";a="1095812704" Received: from newjersey.igk.intel.com ([10.102.20.203]) by fmsmga005.fm.intel.com with ESMTP; 13 Nov 2023 09:37:42 -0800 From: Alexander Lobakin To: Yury Norov Cc: Alexander Lobakin , Andy Shevchenko , Rasmus Villemoes , Alexander Potapenko , Jakub Kicinski , Przemek Kitszel , netdev@vger.kernel.org, linux-btrfs@vger.kernel.org, dm-devel@redhat.com, ntfs3@lists.linux.dev, linux-s390@vger.kernel.org, linux-kernel@vger.kernel.org, David Sterba Subject: [PATCH v3 07/11] btrfs: rename bitmap_set_bits() -> btrfs_bitmap_set_bits() Date: Mon, 13 Nov 2023 18:37:13 +0100 Message-ID: <20231113173717.927056-8-aleksander.lobakin@intel.com> X-Mailer: git-send-email 2.41.0 In-Reply-To: <20231113173717.927056-1-aleksander.lobakin@intel.com> References: <20231113173717.927056-1-aleksander.lobakin@intel.com> Precedence: bulk X-Mailing-List: linux-btrfs@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 bitmap_set_bits() does not start with the FS' prefix and may collide with a new generic helper one day. It operates with the FS-specific types, so there's no chance those two could do the same thing. Just add the prefix to exclude such a possible conflict.
Reviewed-by: Przemek Kitszel Acked-by: David Sterba Signed-off-by: Alexander Lobakin Reviewed-by: Anand Jain --- fs/btrfs/free-space-cache.c | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/fs/btrfs/free-space-cache.c b/fs/btrfs/free-space-cache.c index 6f93c9a2c3e3..8f4949f0b5e2 100644 --- a/fs/btrfs/free-space-cache.c +++ b/fs/btrfs/free-space-cache.c @@ -1913,9 +1913,9 @@ static inline void bitmap_clear_bits(struct btrfs_free_space_ctl *ctl, ctl->free_space -= bytes; } -static void bitmap_set_bits(struct btrfs_free_space_ctl *ctl, - struct btrfs_free_space *info, u64 offset, - u64 bytes) +static void btrfs_bitmap_set_bits(struct btrfs_free_space_ctl *ctl, + struct btrfs_free_space *info, u64 offset, + u64 bytes) { unsigned long start, count, end; int extent_delta = 1; @@ -2251,7 +2251,7 @@ static u64 add_bytes_to_bitmap(struct btrfs_free_space_ctl *ctl, bytes_to_set = min(end - offset, bytes); - bitmap_set_bits(ctl, info, offset, bytes_to_set); + btrfs_bitmap_set_bits(ctl, info, offset, bytes_to_set); return bytes_to_set; From patchwork Mon Nov 13 17:37:14 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Lobakin X-Patchwork-Id: 13454243 Received: from lindbergh.monkeyblade.net (lindbergh.monkeyblade.net [23.128.96.19]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 320F523743; Mon, 13 Nov 2023 17:37:52 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b="l3wCLEkQ" Received: from mgamail.intel.com (mgamail.intel.com [192.55.52.151]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 42DAB1728; Mon, 13 Nov 2023 09:37:49 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1699897069; x=1731433069; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=QmLTJQ3DGSbK+0NxwdNEeIapVkPN18QwsI1fWOyABbg=; b=l3wCLEkQ+KK4P/xdLlmCEmnzeuR4CGMqhWy94bZRKPP8zZuQgIoIlYR6 lB2ohGO0abivQ9+Y1i96iu+w/9pnbgBtyZ/s3mmTMi0JT415kg3qbDu9f CHNy1VDLMxBnpeNy6hkJiXIjfeuzV4OjrjUNj2q8g3AHBfcENw+FeALGW 1o7TPjaA/OEhPPZwyVu5dSEknBfHN8ZUhb1wn2wdpJMgN7XUZOaLvLXOX SiEmkpI0l9bQzyEAShaMko5D+9pZabktmnYD8AUpxCBrhcCvbzxzw+nXt 0IXv7eT2C8jeIP87qS3b5nrqxByxqIP2Sv7pD/zHmKePcMWIeMypU9RbM A==; X-IronPort-AV: E=McAfee;i="6600,9927,10893"; a="370671621" X-IronPort-AV: E=Sophos;i="6.03,299,1694761200"; d="scan'208";a="370671621" Received: from fmsmga005.fm.intel.com ([10.253.24.32]) by fmsmga107.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 13 Nov 2023 09:37:49 -0800 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10893"; a="1095812714" X-IronPort-AV: E=Sophos;i="6.03,299,1694761200"; d="scan'208";a="1095812714" Received: from newjersey.igk.intel.com ([10.102.20.203]) by fmsmga005.fm.intel.com with ESMTP; 13 Nov 2023 09:37:46 -0800 From: Alexander Lobakin To: Yury Norov Cc: Alexander Lobakin , Andy Shevchenko , Rasmus Villemoes , Alexander Potapenko , Jakub Kicinski , Przemek Kitszel , netdev@vger.kernel.org, linux-btrfs@vger.kernel.org, dm-devel@redhat.com, ntfs3@lists.linux.dev, linux-s390@vger.kernel.org, linux-kernel@vger.kernel.org Subject: [PATCH v3 08/11] tools: move alignment-related macros to new Date: Mon, 13 Nov 2023 18:37:14 +0100 Message-ID: <20231113173717.927056-9-aleksander.lobakin@intel.com> X-Mailer: 
git-send-email 2.41.0 In-Reply-To: <20231113173717.927056-1-aleksander.lobakin@intel.com> References: <20231113173717.927056-1-aleksander.lobakin@intel.com> Precedence: bulk X-Mailing-List: linux-btrfs@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Currently, tools have *ALIGN*() macros scattered across the unrelated headers, as there are only 3 of them and they were added separately each time on an as-needed basis. Anyway, let's make it more consistent with the kernel headers and allow using those macros outside of the mentioned headers. Create inside the tools/ folder and include it where needed. Signed-off-by: Alexander Lobakin --- tools/include/linux/align.h | 11 +++++++++++ tools/include/linux/bitmap.h | 2 +- tools/include/linux/mm.h | 5 +---- 3 files changed, 13 insertions(+), 5 deletions(-) create mode 100644 tools/include/linux/align.h diff --git a/tools/include/linux/align.h b/tools/include/linux/align.h new file mode 100644 index 000000000000..62e5582bbb1f --- /dev/null +++ b/tools/include/linux/align.h @@ -0,0 +1,11 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +#ifndef _TOOLS_LINUX_ALIGN_H +#define _TOOLS_LINUX_ALIGN_H + +#include + +#define ALIGN(x, a) __ALIGN_KERNEL((x), (a)) +#define ALIGN_DOWN(x, a) __ALIGN_KERNEL((x) - ((a) - 1), (a)) +#define IS_ALIGNED(x, a) (((x) & ((typeof(x))(a) - 1)) == 0) + +#endif /* _TOOLS_LINUX_ALIGN_H */ diff --git a/tools/include/linux/bitmap.h b/tools/include/linux/bitmap.h index f3566ea0f932..8c6852dba04f 100644 --- a/tools/include/linux/bitmap.h +++ b/tools/include/linux/bitmap.h @@ -3,6 +3,7 @@ #define _TOOLS_LINUX_BITMAP_H #include +#include #include #include #include @@ -126,7 +127,6 @@ static inline bool bitmap_and(unsigned long *dst, const unsigned long *src1, #define BITMAP_MEM_ALIGNMENT (8 * sizeof(unsigned long)) #endif #define BITMAP_MEM_MASK (BITMAP_MEM_ALIGNMENT - 1) -#define IS_ALIGNED(x, a) (((x) & ((typeof(x))(a) - 1)) == 0) static inline bool bitmap_equal(const unsigned long *src1, const unsigned long *src2, unsigned int nbits) diff --git a/tools/include/linux/mm.h b/tools/include/linux/mm.h index f3c82ab5b14c..7a6b98f4e579 100644 --- a/tools/include/linux/mm.h +++ b/tools/include/linux/mm.h @@ -2,8 +2,8 @@ #ifndef _TOOLS_LINUX_MM_H #define _TOOLS_LINUX_MM_H +#include #include -#include #define PAGE_SHIFT 12 #define PAGE_SIZE (_AC(1, UL) << PAGE_SHIFT) @@ -11,9 +11,6 @@ #define PHYS_ADDR_MAX (~(phys_addr_t)0) -#define ALIGN(x, a) __ALIGN_KERNEL((x), (a)) -#define ALIGN_DOWN(x, a) __ALIGN_KERNEL((x) - ((a) - 1), (a)) - #define PAGE_ALIGN(addr) ALIGN(addr, PAGE_SIZE) #define __va(x) ((void *)((unsigned long)(x))) From patchwork Mon Nov 13 17:37:15 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Lobakin X-Patchwork-Id: 13454244 Received: from lindbergh.monkeyblade.net (lindbergh.monkeyblade.net [23.128.96.19]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id C021123743; Mon, 13 Nov 2023 17:37:57 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b="L2bVfb4r" Received: from mgamail.intel.com (mgamail.intel.com [192.55.52.151]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 6434E1BF5; Mon, 13 Nov 2023 09:37:52 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; 
t=1699897072; x=1731433072; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=RaoJmZHtQplaKRbtgYjC8ApgqEEvGU+Ywf+tLuvU7YY=; b=L2bVfb4rxsi2MWegJt2juhaEFpGp2hvQ47oeoFgisv2ct9V6qHwarflu cg76S9FNg8OvFboyCnrYlP0NX9FO2F5uAIzceaNDiJpobNvH2URmO3en9 a2Owne4GAgS+ytzJ5XJQIPncaEohtEl9H4IWfOP1tK6bqmj/VO/NDxPfD 9rACjrDCruqLHTqzqt2PgLu8D54VuZapvkoNoHoVhuKSFcZ5ByJg99q9p bK9F/Tgmt2xm8OKS1H3z1wcrC3I/Lyz0QtgCSlLt3ca1MovjdZkS7HuGy cEXSRdJLFRnme4kZxszFB515eR71gIGQ+/HOc+//9ob/TzRC3nSzHRcvH g==; X-IronPort-AV: E=McAfee;i="6600,9927,10893"; a="370671635" X-IronPort-AV: E=Sophos;i="6.03,299,1694761200"; d="scan'208";a="370671635" Received: from fmsmga005.fm.intel.com ([10.253.24.32]) by fmsmga107.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 13 Nov 2023 09:37:52 -0800 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10893"; a="1095812724" X-IronPort-AV: E=Sophos;i="6.03,299,1694761200"; d="scan'208";a="1095812724" Received: from newjersey.igk.intel.com ([10.102.20.203]) by fmsmga005.fm.intel.com with ESMTP; 13 Nov 2023 09:37:49 -0800 From: Alexander Lobakin To: Yury Norov Cc: Alexander Lobakin , Andy Shevchenko , Rasmus Villemoes , Alexander Potapenko , Jakub Kicinski , Przemek Kitszel , netdev@vger.kernel.org, linux-btrfs@vger.kernel.org, dm-devel@redhat.com, ntfs3@lists.linux.dev, linux-s390@vger.kernel.org, linux-kernel@vger.kernel.org Subject: [PATCH v3 09/11] bitmap: introduce generic optimized bitmap_size() Date: Mon, 13 Nov 2023 18:37:15 +0100 Message-ID: <20231113173717.927056-10-aleksander.lobakin@intel.com> X-Mailer: git-send-email 2.41.0 In-Reply-To: <20231113173717.927056-1-aleksander.lobakin@intel.com> References: <20231113173717.927056-1-aleksander.lobakin@intel.com> Precedence: bulk X-Mailing-List: linux-btrfs@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 The number of times yet another open coded `BITS_TO_LONGS(nbits) * sizeof(long)` can be spotted is huge. Some generic helper is long overdue. Add one, bitmap_size(), but with one detail. BITS_TO_LONGS() uses DIV_ROUND_UP(). The latter works well when both divident and divisor are compile-time constants or when the divisor is not a pow-of-2. When it is however, the compilers sometimes tend to generate suboptimal code (GCC 13): 48 83 c0 3f add $0x3f,%rax 48 c1 e8 06 shr $0x6,%rax 48 8d 14 c5 00 00 00 00 lea 0x0(,%rax,8),%rdx %BITS_PER_LONG is always a pow-2 (either 32 or 64), but GCC still does full division of `nbits + 63` by it and then multiplication by 8. Instead of BITS_TO_LONGS(), use ALIGN() and then divide by 8. GCC: 8d 50 3f lea 0x3f(%rax),%edx c1 ea 03 shr $0x3,%edx 81 e2 f8 ff ff 1f and $0x1ffffff8,%edx Now it shifts `nbits + 63` by 3 positions (IOW performs fast division by 8) and then masks bits[2:0]. bloat-o-meter: add/remove: 0/0 grow/shrink: 20/133 up/down: 156/-773 (-617) Clang does it better and generates the same code before/after starting from -O1, except that with the ALIGN() approach it uses %edx and thus still saves some bytes: add/remove: 0/0 grow/shrink: 9/133 up/down: 18/-538 (-520) Note that we can't expand DIV_ROUND_UP() by adding a check and using this approach there, as it's used in array declarations where expressions are not allowed. Add this helper to tools/ as well. 
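A quick userspace cross-check that the ALIGN()-based formula produces the same values as the open-coded BITS_TO_LONGS() one (the macros below are re-created locally for the sketch and only approximate the kernel ones):

	#include <assert.h>
	#include <limits.h>
	#include <stdio.h>

	#define BITS_PER_BYTE		CHAR_BIT
	#define BITS_PER_LONG		(sizeof(long) * BITS_PER_BYTE)
	#define DIV_ROUND_UP(n, d)	(((n) + (d) - 1) / (d))
	#define BITS_TO_LONGS(nbits)	DIV_ROUND_UP(nbits, BITS_PER_LONG)
	#define ALIGN(x, a)		(((x) + (a) - 1) & ~((__typeof__(x))(a) - 1))

	/* open-coded variant seen all over the tree */
	#define bitmap_size_old(nbits)	(BITS_TO_LONGS(nbits) * sizeof(long))
	/* the new helper: round up to whole longs, then bits -> bytes */
	#define bitmap_size_new(nbits)	(ALIGN(nbits, BITS_PER_LONG) / BITS_PER_BYTE)

	int main(void)
	{
		for (unsigned long nbits = 0; nbits <= 4096; nbits++)
			assert(bitmap_size_old(nbits) == bitmap_size_new(nbits));

		printf("bitmap_size(%d) = %lu bytes\n", 200, bitmap_size_new(200UL));
		return 0;
	}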
Reviewed-by: Przemek Kitszel Acked-by: Yury Norov Signed-off-by: Alexander Lobakin --- drivers/md/dm-clone-metadata.c | 5 ----- drivers/s390/cio/idset.c | 2 +- include/linux/bitmap.h | 8 +++++--- include/linux/cpumask.h | 2 +- lib/math/prime_numbers.c | 2 -- tools/include/linux/bitmap.h | 7 ++++--- 6 files changed, 11 insertions(+), 15 deletions(-) diff --git a/drivers/md/dm-clone-metadata.c b/drivers/md/dm-clone-metadata.c index c43d55672bce..47c1fa7aad8b 100644 --- a/drivers/md/dm-clone-metadata.c +++ b/drivers/md/dm-clone-metadata.c @@ -465,11 +465,6 @@ static void __destroy_persistent_data_structures(struct dm_clone_metadata *cmd) /*---------------------------------------------------------------------------*/ -static size_t bitmap_size(unsigned long nr_bits) -{ - return BITS_TO_LONGS(nr_bits) * sizeof(long); -} - static int __dirty_map_init(struct dirty_map *dmap, unsigned long nr_words, unsigned long nr_regions) { diff --git a/drivers/s390/cio/idset.c b/drivers/s390/cio/idset.c index 0a1105a483bf..e5f28370a903 100644 --- a/drivers/s390/cio/idset.c +++ b/drivers/s390/cio/idset.c @@ -18,7 +18,7 @@ struct idset { static inline unsigned long idset_bitmap_size(int num_ssid, int num_id) { - return BITS_TO_LONGS(num_ssid * num_id) * sizeof(unsigned long); + return bitmap_size(size_mul(num_ssid, num_id)); } static struct idset *idset_new(int num_ssid, int num_id) diff --git a/include/linux/bitmap.h b/include/linux/bitmap.h index 7ca0379be8c1..9a6a27a7f675 100644 --- a/include/linux/bitmap.h +++ b/include/linux/bitmap.h @@ -218,9 +218,11 @@ void bitmap_fold(unsigned long *dst, const unsigned long *orig, #define BITMAP_FIRST_WORD_MASK(start) (~0UL << ((start) & (BITS_PER_LONG - 1))) #define BITMAP_LAST_WORD_MASK(nbits) (~0UL >> (-(nbits) & (BITS_PER_LONG - 1))) +#define bitmap_size(nbits) (ALIGN(nbits, BITS_PER_LONG) / BITS_PER_BYTE) + static inline void bitmap_zero(unsigned long *dst, unsigned int nbits) { - unsigned int len = BITS_TO_LONGS(nbits) * sizeof(unsigned long); + unsigned int len = bitmap_size(nbits); if (small_const_nbits(nbits)) *dst = 0; @@ -230,7 +232,7 @@ static inline void bitmap_zero(unsigned long *dst, unsigned int nbits) static inline void bitmap_fill(unsigned long *dst, unsigned int nbits) { - unsigned int len = BITS_TO_LONGS(nbits) * sizeof(unsigned long); + unsigned int len = bitmap_size(nbits); if (small_const_nbits(nbits)) *dst = ~0UL; @@ -241,7 +243,7 @@ static inline void bitmap_fill(unsigned long *dst, unsigned int nbits) static inline void bitmap_copy(unsigned long *dst, const unsigned long *src, unsigned int nbits) { - unsigned int len = BITS_TO_LONGS(nbits) * sizeof(unsigned long); + unsigned int len = bitmap_size(nbits); if (small_const_nbits(nbits)) *dst = *src; diff --git a/include/linux/cpumask.h b/include/linux/cpumask.h index cfb545841a2c..a2064c2a9441 100644 --- a/include/linux/cpumask.h +++ b/include/linux/cpumask.h @@ -839,7 +839,7 @@ static inline int cpulist_parse(const char *buf, struct cpumask *dstp) */ static inline unsigned int cpumask_size(void) { - return BITS_TO_LONGS(large_cpumask_bits) * sizeof(long); + return bitmap_size(large_cpumask_bits); } /* diff --git a/lib/math/prime_numbers.c b/lib/math/prime_numbers.c index d42cebf7407f..d3b64b10da1c 100644 --- a/lib/math/prime_numbers.c +++ b/lib/math/prime_numbers.c @@ -6,8 +6,6 @@ #include #include -#define bitmap_size(nbits) (BITS_TO_LONGS(nbits) * sizeof(unsigned long)) - struct primes { struct rcu_head rcu; unsigned long last, sz; diff --git a/tools/include/linux/bitmap.h 
b/tools/include/linux/bitmap.h index 8c6852dba04f..210c13b1b857 100644 --- a/tools/include/linux/bitmap.h +++ b/tools/include/linux/bitmap.h @@ -26,13 +26,14 @@ bool __bitmap_intersects(const unsigned long *bitmap1, #define BITMAP_FIRST_WORD_MASK(start) (~0UL << ((start) & (BITS_PER_LONG - 1))) #define BITMAP_LAST_WORD_MASK(nbits) (~0UL >> (-(nbits) & (BITS_PER_LONG - 1))) +#define bitmap_size(nbits) (ALIGN(nbits, BITS_PER_LONG) / BITS_PER_BYTE) + static inline void bitmap_zero(unsigned long *dst, unsigned int nbits) { if (small_const_nbits(nbits)) *dst = 0UL; else { - int len = BITS_TO_LONGS(nbits) * sizeof(unsigned long); - memset(dst, 0, len); + memset(dst, 0, bitmap_size(nbits)); } } @@ -84,7 +85,7 @@ static inline void bitmap_or(unsigned long *dst, const unsigned long *src1, */ static inline unsigned long *bitmap_zalloc(int nbits) { - return calloc(1, BITS_TO_LONGS(nbits) * sizeof(unsigned long)); + return calloc(1, bitmap_size(nbits)); } /* From patchwork Mon Nov 13 17:37:16 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Lobakin X-Patchwork-Id: 13454245 Received: from lindbergh.monkeyblade.net (lindbergh.monkeyblade.net [23.128.96.19]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 4F33F23743; Mon, 13 Nov 2023 17:38:01 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b="H+vE/c0O" Received: from mgamail.intel.com (mgamail.intel.com [192.55.52.151]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 83DC91981; Mon, 13 Nov 2023 09:37:55 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1699897075; x=1731433075; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=+a6GSe4zVAmYMCDzzbGOTK0g1O4rjq1EWeIYoJFDaCM=; b=H+vE/c0OQ0gBfGP5fiDU2+RpPtq3Fe8uP8uVNinhVvPoRxgedP7B90Sn FDNj7BNnx6dYySM/dUKcZLhP2F//Q88MC69Dxu1FXr3wKP+tHGtMuFFEa vsgDJpcS/cSg4RXw/wirsC0xppKPLoi+wafJ6ZuqQ47AnNCbMbpQ6hCVJ m1YuAMt/G/P/TCjhFcYpyGJPu2TuuUELWiHa4CULRF60Ahyz+th89Vvdv XSV+x25g3uI17iJ087lwpeIpgjb+U3WUUe7MuAPqWIz6pEtOSh1ufQLhU T4jaT3EvojwNPvfjpgDwNge70PzFtuidKFK34/11U7eQ7PXCs+C6HY8Zg Q==; X-IronPort-AV: E=McAfee;i="6600,9927,10893"; a="370671655" X-IronPort-AV: E=Sophos;i="6.03,299,1694761200"; d="scan'208";a="370671655" Received: from fmsmga005.fm.intel.com ([10.253.24.32]) by fmsmga107.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 13 Nov 2023 09:37:55 -0800 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10893"; a="1095812742" X-IronPort-AV: E=Sophos;i="6.03,299,1694761200"; d="scan'208";a="1095812742" Received: from newjersey.igk.intel.com ([10.102.20.203]) by fmsmga005.fm.intel.com with ESMTP; 13 Nov 2023 09:37:52 -0800 From: Alexander Lobakin To: Yury Norov Cc: Alexander Lobakin , Andy Shevchenko , Rasmus Villemoes , Alexander Potapenko , Jakub Kicinski , Przemek Kitszel , netdev@vger.kernel.org, linux-btrfs@vger.kernel.org, dm-devel@redhat.com, ntfs3@lists.linux.dev, linux-s390@vger.kernel.org, linux-kernel@vger.kernel.org Subject: [PATCH v3 10/11] bitmap: make bitmap_{get,set}_value8() use bitmap_{read,write}() Date: Mon, 13 Nov 2023 18:37:16 +0100 Message-ID: <20231113173717.927056-11-aleksander.lobakin@intel.com> X-Mailer: git-send-email 2.41.0 In-Reply-To: 
<20231113173717.927056-1-aleksander.lobakin@intel.com> References: <20231113173717.927056-1-aleksander.lobakin@intel.com> Precedence: bulk X-Mailing-List: linux-btrfs@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Now that we have generic bitmap_read() and bitmap_write(), which are inline and try to take care of non-bound-crossing and aligned cases to keep them optimized, collapse bitmap_{get,set}_value8() into simple wrappers around the former ones. bloat-o-meter shows no difference in vmlinux and -2 bytes for gpio-pca953x.ko, which says the optimization didn't suffer due to that change. The converted helpers have the value width embedded and always compile-time constant and that helps a lot. Suggested-by: Yury Norov Signed-off-by: Alexander Lobakin --- include/linux/bitmap.h | 38 +++++--------------------------------- 1 file changed, 5 insertions(+), 33 deletions(-) diff --git a/include/linux/bitmap.h b/include/linux/bitmap.h index 9a6a27a7f675..f80e116b8f60 100644 --- a/include/linux/bitmap.h +++ b/include/linux/bitmap.h @@ -609,39 +609,6 @@ static inline void bitmap_from_u64(unsigned long *dst, u64 mask) bitmap_from_arr64(dst, &mask, 64); } -/** - * bitmap_get_value8 - get an 8-bit value within a memory region - * @map: address to the bitmap memory region - * @start: bit offset of the 8-bit value; must be a multiple of 8 - * - * Returns the 8-bit value located at the @start bit offset within the @src - * memory region. - */ -static inline unsigned long bitmap_get_value8(const unsigned long *map, - unsigned long start) -{ - const size_t index = BIT_WORD(start); - const unsigned long offset = start % BITS_PER_LONG; - - return (map[index] >> offset) & 0xFF; -} - -/** - * bitmap_set_value8 - set an 8-bit value within a memory region - * @map: address to the bitmap memory region - * @value: the 8-bit value; values wider than 8 bits may clobber bitmap - * @start: bit offset of the 8-bit value; must be a multiple of 8 - */ -static inline void bitmap_set_value8(unsigned long *map, unsigned long value, - unsigned long start) -{ - const size_t index = BIT_WORD(start); - const unsigned long offset = start % BITS_PER_LONG; - - map[index] &= ~(0xFFUL << offset); - map[index] |= value << offset; -} - /** * bitmap_read - read a value of n-bits from the memory region * @map: address to the bitmap memory region @@ -715,6 +682,11 @@ static inline void bitmap_write(unsigned long *map, unsigned long value, map[index + 1] |= (value >> space); } +#define bitmap_get_value8(map, start) \ + bitmap_read(map, start, BITS_PER_BYTE) +#define bitmap_set_value8(map, value, start) \ + bitmap_write(map, value, start, BITS_PER_BYTE) + #endif /* __ASSEMBLY__ */ #endif /* __LINUX_BITMAP_H */ From patchwork Mon Nov 13 17:37:17 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Lobakin X-Patchwork-Id: 13454246 Received: from lindbergh.monkeyblade.net (lindbergh.monkeyblade.net [23.128.96.19]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 44FA6241E9; Mon, 13 Nov 2023 17:38:13 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b="JF7lsWMJ" Received: from mgamail.intel.com (mgamail.intel.com [192.55.52.151]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 31501211D; Mon, 13 Nov 2023 09:38:03 -0800 (PST) DKIM-Signature: 
v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1699897083; x=1731433083; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=ZeCLSCpKtslPzs24xkgYnsE5Qt2ndE4iPRvYGoOuhBI=; b=JF7lsWMJGnB7YgCfMe8+7vewabLdMH76Ei5QajXVKcNcvO21OoZG2a3R EPzALrnXej2SVrURFuVhIc7E1CzAz8fKUCYeLvPfnjrz16sQWdKc/75ZY kfYPb9Rxkr9M+HDwv5ELxOfdIDQ4EFrGN1gsEBKdhLwLbdHKc6vp4iiTj aGV6HkEPtkrR1FZGM9FV8R1XrT45VZp0xvN7HVlgGbFI1NdRlE+kUALCX W/CvdnGy2OIn4OK2bhyXqToxgpoI+k3jUJz6y870QufLqz++sZ2tmwWaz 8h+ZT7MPjDeY2OFOeB991ZPSwNxYFXw+ZpA/O4gC9Szry6nwktgP/bChi g==; X-IronPort-AV: E=McAfee;i="6600,9927,10893"; a="370671677" X-IronPort-AV: E=Sophos;i="6.03,299,1694761200"; d="scan'208";a="370671677" Received: from fmsmga005.fm.intel.com ([10.253.24.32]) by fmsmga107.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 13 Nov 2023 09:37:58 -0800 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10893"; a="1095812752" X-IronPort-AV: E=Sophos;i="6.03,299,1694761200"; d="scan'208";a="1095812752" Received: from newjersey.igk.intel.com ([10.102.20.203]) by fmsmga005.fm.intel.com with ESMTP; 13 Nov 2023 09:37:55 -0800 From: Alexander Lobakin To: Yury Norov Cc: Alexander Lobakin , Andy Shevchenko , Rasmus Villemoes , Alexander Potapenko , Jakub Kicinski , Przemek Kitszel , netdev@vger.kernel.org, linux-btrfs@vger.kernel.org, dm-devel@redhat.com, ntfs3@lists.linux.dev, linux-s390@vger.kernel.org, linux-kernel@vger.kernel.org Subject: [PATCH v3 11/11] lib/bitmap: add compile-time test for __assign_bit() optimization Date: Mon, 13 Nov 2023 18:37:17 +0100 Message-ID: <20231113173717.927056-12-aleksander.lobakin@intel.com> X-Mailer: git-send-email 2.41.0 In-Reply-To: <20231113173717.927056-1-aleksander.lobakin@intel.com> References: <20231113173717.927056-1-aleksander.lobakin@intel.com> Precedence: bulk X-Mailing-List: linux-btrfs@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Commit dc34d5036692 ("lib: test_bitmap: add compile-time optimization/evaluations assertions") initially missed __assign_bit(), which led to that quite a time passed before I realized it doesn't get optimized at compilation time. Now that it does, add test for that just to make sure nothing will break one day. To make things more interesting, use bitmap_complement() and bitmap_full(), thus checking their compile-time evaluation as well. And remove the misleading comment mentioning the workaround removed recently in favor of adding the whole file to GCov exceptions. Reviewed-by: Przemek Kitszel Signed-off-by: Alexander Lobakin --- lib/test_bitmap.c | 18 ++++++++++-------- 1 file changed, 10 insertions(+), 8 deletions(-) diff --git a/lib/test_bitmap.c b/lib/test_bitmap.c index a6e92cf5266a..4ee1f8ceb51d 100644 --- a/lib/test_bitmap.c +++ b/lib/test_bitmap.c @@ -1204,14 +1204,7 @@ static void __init test_bitmap_const_eval(void) * in runtime. */ - /* - * Equals to `unsigned long bitmap[1] = { GENMASK(6, 5), }`. - * Clang on s390 optimizes bitops at compile-time as intended, but at - * the same time stops treating @bitmap and @bitopvar as compile-time - * constants after regular test_bit() is executed, thus triggering the - * build bugs below. So, call const_test_bit() there directly until - * the compiler is fixed. 
- */ + /* Equals to `unsigned long bitmap[1] = { GENMASK(6, 5), }` */ bitmap_clear(bitmap, 0, BITS_PER_LONG); if (!test_bit(7, bitmap)) bitmap_set(bitmap, 5, 2); @@ -1243,6 +1236,15 @@ static void __init test_bitmap_const_eval(void) /* ~BIT(25) */ BUILD_BUG_ON(!__builtin_constant_p(~var)); BUILD_BUG_ON(~var != ~BIT(25)); + + /* ~BIT(25) | BIT(25) == ~0UL */ + bitmap_complement(&var, &var, BITS_PER_LONG); + __assign_bit(25, &var, true); + + /* !(~(~0UL)) == 1 */ + res = bitmap_full(&var, BITS_PER_LONG); + BUILD_BUG_ON(!__builtin_constant_p(res)); + BUILD_BUG_ON(!res); } /*