From patchwork Mon Oct 16 16:52:38 2023
X-Patchwork-Submitter: Alexander Lobakin
X-Patchwork-Id: 13423849
From: Alexander Lobakin
To: Yury Norov
Cc: Alexander Lobakin,
	Andy Shevchenko,
	Rasmus Villemoes,
	Alexander Potapenko,
	Jakub Kicinski,
	Eric Dumazet,
	David Ahern,
	Przemek Kitszel,
	Simon Horman,
	netdev@vger.kernel.org,
	linux-btrfs@vger.kernel.org,
	dm-devel@redhat.com,
	ntfs3@lists.linux.dev,
	linux-s390@vger.kernel.org,
	linux-kernel@vger.kernel.org
Subject: [PATCH v2 04/13] linkmode: convert linkmode_{test,set,clear,mod}_bit() to macros
Date: Mon, 16 Oct 2023 18:52:38 +0200
Message-ID: <20231016165247.14212-5-aleksander.lobakin@intel.com>
X-Mailer: git-send-email 2.41.0
In-Reply-To: <20231016165247.14212-1-aleksander.lobakin@intel.com>
References: <20231016165247.14212-1-aleksander.lobakin@intel.com>

Since commit b03fc1173c0c ("bitops: let optimize out non-atomic bitops
on compile-time constants"), the non-atomic bitops are macros which the
compilers can expand into compile-time expressions, resulting in better
optimized object code. Unfortunately, it turned out that passing
`volatile` to those macros discards any possibility of optimization, as
the compilers then don't even try to check whether the passed bitmap is
known at compile time.
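
To illustrate the mechanism, here is a simplified stand-alone sketch.
It is not the actual include/linux/bitops.h implementation; all names
in it (sketch_test_bit, rt_test_bit, known_map, hw_map) are invented
for the example:

/*
 * Illustration only: a simplified, stand-alone sketch of the dispatch
 * used by the kernel's non-atomic bitops macros since b03fc1173c0c.
 * This is NOT the actual include/linux/bitops.h code; all names below
 * are made up for the example.
 */
#include <stdbool.h>
#include <stdio.h>

#define BITS_PER_LONG	(8 * sizeof(unsigned long))

/* runtime fallback, same shape as a generic test_bit() */
static inline bool rt_test_bit(unsigned int nr,
			       const volatile unsigned long *addr)
{
	return 1UL & (addr[nr / BITS_PER_LONG] >> (nr % BITS_PER_LONG));
}

/*
 * Take the compile-time path only when the compiler can prove both the
 * bit number and the bitmap word are constants.  A volatile-qualified
 * @addr makes the __builtin_constant_p() check on the load always
 * fail, so the runtime fallback is the only path left, which is the
 * problem described above.
 */
#define sketch_test_bit(nr, addr)					\
	((__builtin_constant_p(nr) &&					\
	  __builtin_constant_p((addr)[(nr) / BITS_PER_LONG])) ?		\
	 !!((addr)[(nr) / BITS_PER_LONG] &				\
	    (1UL << ((nr) % BITS_PER_LONG))) :				\
	 rt_test_bit(nr, addr))

static const unsigned long known_map[1] = { 0x5 };

int main(void)
{
	volatile unsigned long hw_map[1] = { 0x5 };

	/* with optimization, this can usually be folded to a constant */
	printf("%d\n", sketch_test_bit(2, known_map));
	/* volatile bitmap: always dispatched to rt_test_bit() at runtime */
	printf("%d\n", sketch_test_bit(2, hw_map));
	return 0;
}

Built with optimization, the first call can typically be folded into a
constant, while the volatile one always ends up in the runtime helper.
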
In addition to that, the mentioned linkmode helpers are marked with
`inline`, not `__always_inline`, meaning there is no guarantee that a
compiler won't uninline them for no apparent reason, which would also
effectively prevent them from being optimized (compilers are known to
sometimes uninline even trivial code like `2 + 2`).

Convert linkmode_*_bit() from inlines to macros. Their calling
conventions are 1:1 with the corresponding bitops, so there is no need
to enumerate and map the arguments, only the names. There are no
changes in the vmlinux object code (compiled by LLVM for x86_64)
whatsoever, but that doesn't necessarily mean the change is
meaningless.

Reviewed-by: Przemek Kitszel
Signed-off-by: Alexander Lobakin
Acked-by: Jakub Kicinski
---
 include/linux/linkmode.h | 27 ++++-----------------------
 1 file changed, 4 insertions(+), 23 deletions(-)

diff --git a/include/linux/linkmode.h b/include/linux/linkmode.h
index 15e0e0209da4..f231e2edbfa5 100644
--- a/include/linux/linkmode.h
+++ b/include/linux/linkmode.h
@@ -38,10 +38,10 @@ static inline int linkmode_andnot(unsigned long *dst, const unsigned long *src1,
 	return bitmap_andnot(dst, src1, src2, __ETHTOOL_LINK_MODE_MASK_NBITS);
 }
 
-static inline void linkmode_set_bit(int nr, volatile unsigned long *addr)
-{
-	__set_bit(nr, addr);
-}
+#define linkmode_test_bit	test_bit
+#define linkmode_set_bit	__set_bit
+#define linkmode_clear_bit	__clear_bit
+#define linkmode_mod_bit	__assign_bit
 
 static inline void linkmode_set_bit_array(const int *array, int array_size,
 					  unsigned long *addr)
@@ -52,25 +52,6 @@ static inline void linkmode_set_bit_array(const int *array, int array_size,
 		linkmode_set_bit(array[i], addr);
 }
 
-static inline void linkmode_clear_bit(int nr, volatile unsigned long *addr)
-{
-	__clear_bit(nr, addr);
-}
-
-static inline void linkmode_mod_bit(int nr, volatile unsigned long *addr,
-				    int set)
-{
-	if (set)
-		linkmode_set_bit(nr, addr);
-	else
-		linkmode_clear_bit(nr, addr);
-}
-
-static inline int linkmode_test_bit(int nr, const volatile unsigned long *addr)
-{
-	return test_bit(nr, addr);
-}
-
 static inline int linkmode_equal(const unsigned long *src1,
				 const unsigned long *src2)
 {
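
For illustration only, a stand-alone toy sketch (invented names, not
kernel code) of the name-aliasing approach used in the hunk above:

/*
 * Illustration only, not kernel code: a toy example of the
 * name-aliasing technique the patch uses.  When the wrapper's calling
 * convention matches the wrapped helper 1:1, a plain
 * "#define wrapper helper" alias is enough; no argument list has to be
 * spelled out.  All "toy_" and "wrap_" names below are made up.
 */
#include <stdbool.h>
#include <stdio.h>

#define BITS_PER_LONG	(8 * sizeof(unsigned long))

#define toy_set_bit(nr, addr) \
	((addr)[(nr) / BITS_PER_LONG] |= 1UL << ((nr) % BITS_PER_LONG))
#define toy_clear_bit(nr, addr) \
	((addr)[(nr) / BITS_PER_LONG] &= ~(1UL << ((nr) % BITS_PER_LONG)))
#define toy_test_bit(nr, addr) \
	(!!((addr)[(nr) / BITS_PER_LONG] & (1UL << ((nr) % BITS_PER_LONG))))
#define toy_assign_bit(nr, addr, set) \
	((set) ? toy_set_bit(nr, addr) : toy_clear_bit(nr, addr))

/* 1:1 aliases, mirroring the linkmode to bitops mapping above */
#define wrap_set_bit	toy_set_bit
#define wrap_clear_bit	toy_clear_bit
#define wrap_test_bit	toy_test_bit
#define wrap_mod_bit	toy_assign_bit

int main(void)
{
	unsigned long map[1] = { 0 };

	wrap_set_bit(3, map);		/* same arguments as toy_set_bit() */
	wrap_mod_bit(5, map, true);	/* same arguments as toy_assign_bit() */
	wrap_clear_bit(3, map);
	printf("%d %d\n", wrap_test_bit(3, map), wrap_test_bit(5, map));
	return 0;
}

The alias works precisely because the argument order and count match;
if they differed, the wrapper would have to spell out and forward its
arguments explicitly.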