From patchwork Wed Dec 6 01:01:00 2023
X-Patchwork-Submitter: Jesse Brandeburg
X-Patchwork-Id: 13480909
X-Patchwork-Delegate: kuba@kernel.org
From: Jesse Brandeburg
To: intel-wired-lan@lists.osuosl.org
Cc: Jesse Brandeburg, netdev@vger.kernel.org, aleksander.lobakin@intel.com, przemyslaw.kitszel@intel.com, horms@kernel.org, marcin.szycik@linux.intel.com
Subject: [PATCH iwl-next v2 01/15] e1000e: make lost bits explicit
Date: Tue, 5 Dec 2023 17:01:00 -0800
Message-Id: <20231206010114.2259388-2-jesse.brandeburg@intel.com>
In-Reply-To: <20231206010114.2259388-1-jesse.brandeburg@intel.com>
References: <20231206010114.2259388-1-jesse.brandeburg@intel.com>
X-Mailing-List: netdev@vger.kernel.org

For more than 15 years this code has passed in a request for a page and then masked that page off when reading/writing. The code has been here forever, but FIELD_PREP finds the bug when the code is converted to use it. Change the code to do exactly the same thing as before, while still allowing the conversion to FIELD_PREP in a later patch. To make clear what was lost by this change, a comment is left in place; there is no point in changing the code to generate a correct sequence at this stage. This is deliberately not a Fixes-tagged patch because it doesn't change the binary output.
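For readers unfamiliar with the failure mode described above, here is a minimal standalone sketch (plain userspace C, not driver code) of how a shift-and-mask insertion silently drops bits that do not fit the field, and why a FIELD_PREP() conversion trips over it. The DEMO_* constants and the DEMO_REG() page encoding are illustrative assumptions only, not the real E1000_KMRNCTRLSTA_OFFSET / GG82563_REG() definitions from the driver.

#include <stdio.h>
#include <stdint.h>

#define DEMO_OFFSET_SHIFT	16
#define DEMO_OFFSET_MASK	0x001F0000u		/* assumed 5-bit offset field */
#define DEMO_REG(page, reg)	(((page) << 5) | (reg))	/* assumed page encoding */

int main(void)
{
	uint32_t offset = DEMO_REG(0x34, 4);	/* 0x684: page bits sit above bit 4 */
	uint32_t ctrl = (offset << DEMO_OFFSET_SHIFT) & DEMO_OFFSET_MASK;

	/* Only the low 5 bits of 'offset' survive the mask, so asking for
	 * page 0x34, register 4 programs exactly the same value as plain
	 * register 4 -- the page is silently lost.  For a constant argument
	 * like this, the kernel's FIELD_PREP() turns the mismatch into a
	 * compile-time error, which is how the conversion exposed the bug.
	 */
	printf("requested offset 0x%x, field actually programmed 0x%x\n",
	       (unsigned int)offset,
	       (unsigned int)((ctrl & DEMO_OFFSET_MASK) >> DEMO_OFFSET_SHIFT));
	return 0;
}

Compiling and running this prints "requested offset 0x684, field actually programmed 0x4", which is exactly the "lost bits" behavior this patch makes explicit.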
Reviewed-by: Marcin Szycik Signed-off-by: Jesse Brandeburg Tested-by: Naama Meir Reviewed-by: Simon Horman --- drivers/net/ethernet/intel/e1000e/80003es2lan.c | 13 +++++++------ 1 file changed, 7 insertions(+), 6 deletions(-) diff --git a/drivers/net/ethernet/intel/e1000e/80003es2lan.c b/drivers/net/ethernet/intel/e1000e/80003es2lan.c index be9c695dde12..74671201208e 100644 --- a/drivers/net/ethernet/intel/e1000e/80003es2lan.c +++ b/drivers/net/ethernet/intel/e1000e/80003es2lan.c @@ -1035,17 +1035,18 @@ static s32 e1000_setup_copper_link_80003es2lan(struct e1000_hw *hw) * iteration and increase the max iterations when * polling the phy; this fixes erroneous timeouts at 10Mbps. */ - ret_val = e1000_write_kmrn_reg_80003es2lan(hw, GG82563_REG(0x34, 4), - 0xFFFF); + /* these next three accesses were always meant to use page 0x34 using + * GG82563_REG(0x34, N) but never did, so we've just corrected the call + * to not drop bits + */ + ret_val = e1000_write_kmrn_reg_80003es2lan(hw, 4, 0xFFFF); if (ret_val) return ret_val; - ret_val = e1000_read_kmrn_reg_80003es2lan(hw, GG82563_REG(0x34, 9), - ®_data); + ret_val = e1000_read_kmrn_reg_80003es2lan(hw, 9, ®_data); if (ret_val) return ret_val; reg_data |= 0x3F; - ret_val = e1000_write_kmrn_reg_80003es2lan(hw, GG82563_REG(0x34, 9), - reg_data); + ret_val = e1000_write_kmrn_reg_80003es2lan(hw, 9, reg_data); if (ret_val) return ret_val; ret_val = From patchwork Wed Dec 6 01:01:01 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jesse Brandeburg X-Patchwork-Id: 13480912 X-Patchwork-Delegate: kuba@kernel.org Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b="hB5it7pL" Received: from mgamail.intel.com (mgamail.intel.com [198.175.65.9]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id A4A7F1A4 for ; Tue, 5 Dec 2023 17:01:33 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1701824494; x=1733360494; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=YhWRBTHSOQ27ywOCaCv/yDPG0uywhWxKEWH20W0l27Y=; b=hB5it7pLL/4+L/tOjvMAjVALY5vHXfpjuJmEmtKYE/JgkpO6gHRYzGDk 5AMMBh+SINVxfY3phaQPlSlx1kxvUUnCt0+A+3n/5Drzo649MiSeGKEGk LkS8+xd2fc2+A9IiMi0l5iL8H8U/INr/xHFpxYDRPMdNeL4ti5pfSgEmj CP3yQSTS0NQgw2UJmkfS+FJxgS4sskZg+eaBZyemzYeKf1IDUlb6QnTOO Za6lkFRtpbmMOYCsUsc75FgHwSriFSXknRBXEOSGJuzrUSliVMblAPhIh yVPy7xEDyaqWY8Q0ksGewEfuTZvToak1Xn9N1ee6Bo77oNPJEwbGaxIRg w==; X-IronPort-AV: E=McAfee;i="6600,9927,10915"; a="12700277" X-IronPort-AV: E=Sophos;i="6.04,254,1695711600"; d="scan'208";a="12700277" Received: from fmsmga004.fm.intel.com ([10.253.24.48]) by orvoesa101.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 05 Dec 2023 17:01:32 -0800 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10915"; a="841655232" X-IronPort-AV: E=Sophos;i="6.04,254,1695711600"; d="scan'208";a="841655232" Received: from jbrandeb-spr1.jf.intel.com ([10.166.28.233]) by fmsmga004-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 05 Dec 2023 17:01:31 -0800 From: Jesse Brandeburg To: intel-wired-lan@lists.osuosl.org Cc: Jesse Brandeburg , netdev@vger.kernel.org, aleksander.lobakin@intel.com, przemyslaw.kitszel@intel.com, horms@kernel.org, marcin.szycik@linux.intel.com Subject: [PATCH iwl-next v2 02/15] intel: add bit macro includes where needed Date: Tue, 5 Dec 2023 17:01:01 -0800 Message-Id: 
<20231206010114.2259388-3-jesse.brandeburg@intel.com> X-Mailer: git-send-email 2.39.3 In-Reply-To: <20231206010114.2259388-1-jesse.brandeburg@intel.com> References: <20231206010114.2259388-1-jesse.brandeburg@intel.com> Precedence: bulk X-Mailing-List: netdev@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Patchwork-Delegate: kuba@kernel.org This series is introducing the use of FIELD_GET and FIELD_PREP which requires bitfield.h to be included. Fix all the includes in this one change, and rearrange includes into alphabetical order to ease readability and future maintenance. virtchnl.h and it's usage was modified to have it's own includes as it should. This required including bits.h for virtchnl.h. Reviewed-by: Marcin Szycik Signed-off-by: Jesse Brandeburg Reviewed-by: Simon Horman --- drivers/net/ethernet/intel/e1000/e1000_hw.c | 1 + drivers/net/ethernet/intel/fm10k/fm10k_pf.c | 1 + drivers/net/ethernet/intel/fm10k/fm10k_vf.c | 1 + drivers/net/ethernet/intel/i40e/i40e_common.c | 1 + drivers/net/ethernet/intel/i40e/i40e_dcb.c | 2 ++ drivers/net/ethernet/intel/i40e/i40e_nvm.c | 1 + drivers/net/ethernet/intel/iavf/iavf_common.c | 3 +- .../net/ethernet/intel/iavf/iavf_ethtool.c | 5 ++-- drivers/net/ethernet/intel/iavf/iavf_fdir.c | 1 + drivers/net/ethernet/intel/iavf/iavf_txrx.c | 1 + drivers/net/ethernet/intel/igb/e1000_i210.c | 4 +-- drivers/net/ethernet/intel/igb/e1000_nvm.c | 4 +-- drivers/net/ethernet/intel/igb/e1000_phy.c | 4 +-- drivers/net/ethernet/intel/igbvf/netdev.c | 28 +++++++++---------- drivers/net/ethernet/intel/igc/igc_i225.c | 1 + drivers/net/ethernet/intel/igc/igc_phy.c | 1 + include/linux/avf/virtchnl.h | 1 + 17 files changed, 37 insertions(+), 23 deletions(-) diff --git a/drivers/net/ethernet/intel/e1000/e1000_hw.c b/drivers/net/ethernet/intel/e1000/e1000_hw.c index 4542e2bc28e8..4576511c99f5 100644 --- a/drivers/net/ethernet/intel/e1000/e1000_hw.c +++ b/drivers/net/ethernet/intel/e1000/e1000_hw.c @@ -5,6 +5,7 @@ * Shared functions for accessing and configuring the MAC */ +#include #include "e1000.h" static s32 e1000_check_downshift(struct e1000_hw *hw); diff --git a/drivers/net/ethernet/intel/fm10k/fm10k_pf.c b/drivers/net/ethernet/intel/fm10k/fm10k_pf.c index af1b0cde3670..ae700a1807c6 100644 --- a/drivers/net/ethernet/intel/fm10k/fm10k_pf.c +++ b/drivers/net/ethernet/intel/fm10k/fm10k_pf.c @@ -1,6 +1,7 @@ // SPDX-License-Identifier: GPL-2.0 /* Copyright(c) 2013 - 2019 Intel Corporation. */ +#include #include "fm10k_pf.h" #include "fm10k_vf.h" diff --git a/drivers/net/ethernet/intel/fm10k/fm10k_vf.c b/drivers/net/ethernet/intel/fm10k/fm10k_vf.c index dc8ccd378ec9..c50928ec14ff 100644 --- a/drivers/net/ethernet/intel/fm10k/fm10k_vf.c +++ b/drivers/net/ethernet/intel/fm10k/fm10k_vf.c @@ -1,6 +1,7 @@ // SPDX-License-Identifier: GPL-2.0 /* Copyright(c) 2013 - 2019 Intel Corporation. */ +#include #include "fm10k_vf.h" /** diff --git a/drivers/net/ethernet/intel/i40e/i40e_common.c b/drivers/net/ethernet/intel/i40e/i40e_common.c index bd52b73cf61f..522cf2e5f365 100644 --- a/drivers/net/ethernet/intel/i40e/i40e_common.c +++ b/drivers/net/ethernet/intel/i40e/i40e_common.c @@ -2,6 +2,7 @@ /* Copyright(c) 2013 - 2021 Intel Corporation. 
*/ #include +#include #include #include #include diff --git a/drivers/net/ethernet/intel/i40e/i40e_dcb.c b/drivers/net/ethernet/intel/i40e/i40e_dcb.c index 498728e16a37..0d334036ab8b 100644 --- a/drivers/net/ethernet/intel/i40e/i40e_dcb.c +++ b/drivers/net/ethernet/intel/i40e/i40e_dcb.c @@ -1,6 +1,8 @@ // SPDX-License-Identifier: GPL-2.0 /* Copyright(c) 2013 - 2021 Intel Corporation. */ +#include +#include "i40e_adminq.h" #include "i40e_alloc.h" #include "i40e_dcb.h" #include "i40e_prototype.h" diff --git a/drivers/net/ethernet/intel/i40e/i40e_nvm.c b/drivers/net/ethernet/intel/i40e/i40e_nvm.c index bebf9d4e9068..70215ae92b0c 100644 --- a/drivers/net/ethernet/intel/i40e/i40e_nvm.c +++ b/drivers/net/ethernet/intel/i40e/i40e_nvm.c @@ -1,6 +1,7 @@ // SPDX-License-Identifier: GPL-2.0 /* Copyright(c) 2013 - 2018 Intel Corporation. */ +#include #include #include "i40e_alloc.h" #include "i40e_prototype.h" diff --git a/drivers/net/ethernet/intel/iavf/iavf_common.c b/drivers/net/ethernet/intel/iavf/iavf_common.c index 89d2bce529ae..af5cc69f26e3 100644 --- a/drivers/net/ethernet/intel/iavf/iavf_common.c +++ b/drivers/net/ethernet/intel/iavf/iavf_common.c @@ -1,10 +1,11 @@ // SPDX-License-Identifier: GPL-2.0 /* Copyright(c) 2013 - 2018 Intel Corporation. */ +#include +#include #include "iavf_type.h" #include "iavf_adminq.h" #include "iavf_prototype.h" -#include /** * iavf_aq_str - convert AQ err code to a string diff --git a/drivers/net/ethernet/intel/iavf/iavf_ethtool.c b/drivers/net/ethernet/intel/iavf/iavf_ethtool.c index 7e58a578f3d4..11150bdc63d0 100644 --- a/drivers/net/ethernet/intel/iavf/iavf_ethtool.c +++ b/drivers/net/ethernet/intel/iavf/iavf_ethtool.c @@ -1,11 +1,12 @@ // SPDX-License-Identifier: GPL-2.0 /* Copyright(c) 2013 - 2018 Intel Corporation. */ +#include +#include + /* ethtool support for iavf */ #include "iavf.h" -#include - /* ethtool statistics helpers */ /** diff --git a/drivers/net/ethernet/intel/iavf/iavf_fdir.c b/drivers/net/ethernet/intel/iavf/iavf_fdir.c index 03e774bd2a5b..65ddcd81c993 100644 --- a/drivers/net/ethernet/intel/iavf/iavf_fdir.c +++ b/drivers/net/ethernet/intel/iavf/iavf_fdir.c @@ -3,6 +3,7 @@ /* flow director ethtool support for iavf */ +#include #include "iavf.h" #define GTPU_PORT 2152 diff --git a/drivers/net/ethernet/intel/iavf/iavf_txrx.c b/drivers/net/ethernet/intel/iavf/iavf_txrx.c index d64c4997136b..fb7edba9c2f8 100644 --- a/drivers/net/ethernet/intel/iavf/iavf_txrx.c +++ b/drivers/net/ethernet/intel/iavf/iavf_txrx.c @@ -1,6 +1,7 @@ // SPDX-License-Identifier: GPL-2.0 /* Copyright(c) 2013 - 2018 Intel Corporation. */ +#include #include #include "iavf.h" diff --git a/drivers/net/ethernet/intel/igb/e1000_i210.c b/drivers/net/ethernet/intel/igb/e1000_i210.c index b9b9d35494d2..53b396fd194a 100644 --- a/drivers/net/ethernet/intel/igb/e1000_i210.c +++ b/drivers/net/ethernet/intel/igb/e1000_i210.c @@ -5,9 +5,9 @@ * e1000_i211 */ -#include +#include #include - +#include #include "e1000_hw.h" #include "e1000_i210.h" diff --git a/drivers/net/ethernet/intel/igb/e1000_nvm.c b/drivers/net/ethernet/intel/igb/e1000_nvm.c index fa136e6e9328..0da57e89593a 100644 --- a/drivers/net/ethernet/intel/igb/e1000_nvm.c +++ b/drivers/net/ethernet/intel/igb/e1000_nvm.c @@ -1,9 +1,9 @@ // SPDX-License-Identifier: GPL-2.0 /* Copyright(c) 2007 - 2018 Intel Corporation. 
*/ -#include +#include #include - +#include #include "e1000_mac.h" #include "e1000_nvm.h" diff --git a/drivers/net/ethernet/intel/igb/e1000_phy.c b/drivers/net/ethernet/intel/igb/e1000_phy.c index a018000f7db9..3c1b562a3271 100644 --- a/drivers/net/ethernet/intel/igb/e1000_phy.c +++ b/drivers/net/ethernet/intel/igb/e1000_phy.c @@ -1,9 +1,9 @@ // SPDX-License-Identifier: GPL-2.0 /* Copyright(c) 2007 - 2018 Intel Corporation. */ -#include +#include #include - +#include #include "e1000_mac.h" #include "e1000_phy.h" diff --git a/drivers/net/ethernet/intel/igbvf/netdev.c b/drivers/net/ethernet/intel/igbvf/netdev.c index fd712585af27..e6c1fbee049e 100644 --- a/drivers/net/ethernet/intel/igbvf/netdev.c +++ b/drivers/net/ethernet/intel/igbvf/netdev.c @@ -3,25 +3,25 @@ #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt -#include -#include -#include -#include -#include -#include +#include #include -#include -#include -#include -#include -#include -#include -#include #include #include +#include +#include +#include +#include +#include +#include +#include #include #include - +#include +#include +#include +#include +#include +#include #include "igbvf.h" char igbvf_driver_name[] = "igbvf"; diff --git a/drivers/net/ethernet/intel/igc/igc_i225.c b/drivers/net/ethernet/intel/igc/igc_i225.c index 17546a035ab1..d2562c8e8015 100644 --- a/drivers/net/ethernet/intel/igc/igc_i225.c +++ b/drivers/net/ethernet/intel/igc/igc_i225.c @@ -1,6 +1,7 @@ // SPDX-License-Identifier: GPL-2.0 /* Copyright (c) 2018 Intel Corporation */ +#include #include #include "igc_hw.h" diff --git a/drivers/net/ethernet/intel/igc/igc_phy.c b/drivers/net/ethernet/intel/igc/igc_phy.c index 53b77c969c85..d0d9e7170154 100644 --- a/drivers/net/ethernet/intel/igc/igc_phy.c +++ b/drivers/net/ethernet/intel/igc/igc_phy.c @@ -1,6 +1,7 @@ // SPDX-License-Identifier: GPL-2.0 /* Copyright (c) 2018 Intel Corporation */ +#include #include "igc_phy.h" /** diff --git a/include/linux/avf/virtchnl.h b/include/linux/avf/virtchnl.h index 6b3acf15be5c..99ae7960a8d1 100644 --- a/include/linux/avf/virtchnl.h +++ b/include/linux/avf/virtchnl.h @@ -5,6 +5,7 @@ #define _VIRTCHNL_H_ #include +#include #include #include From patchwork Wed Dec 6 01:01:02 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jesse Brandeburg X-Patchwork-Id: 13480911 X-Patchwork-Delegate: kuba@kernel.org Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b="aQihTynR" Received: from mgamail.intel.com (mgamail.intel.com [198.175.65.9]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 8E1AB1AA for ; Tue, 5 Dec 2023 17:01:34 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1701824495; x=1733360495; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=iw6yro3/LMCZEEhj2IEYsH6Nqa5hJW1h7JIi8Ii0uUA=; b=aQihTynRgsm4KZDNokmkuvjiw8sIxOFivW4LQ1O5lEdzxIK3hItTgqys yoFzuH8HhaQPIp6i9NZSqvpPkPvnVFfmYGj3/KXnCo1d03xkcwbre2/Hi GhD+wAYWzOMdUrCt2ukNgmRtb5N9ExIfrl2p/DoIujm35/mFNA3mFESX4 T++0kit1cJkbFAV9cOBgB8T98jxGVIuRdHgR6cTqQjJjEcT7EonuOBr/5 9c1xdmuNgui+WSKQVWDyis3dCqGl7xOXegNXB/pmM4ej65oWDWEvgpXM1 iUD97K0hysmgVMUKgKB2hxNaa4F1X+v4setQwNAwlM1WNEqMy+kdUHi7+ Q==; X-IronPort-AV: E=McAfee;i="6600,9927,10915"; a="12700280" X-IronPort-AV: E=Sophos;i="6.04,254,1695711600"; d="scan'208";a="12700280" Received: from fmsmga004.fm.intel.com ([10.253.24.48]) by 
orvoesa101.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 05 Dec 2023 17:01:33 -0800 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10915"; a="841655237" X-IronPort-AV: E=Sophos;i="6.04,254,1695711600"; d="scan'208";a="841655237" Received: from jbrandeb-spr1.jf.intel.com ([10.166.28.233]) by fmsmga004-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 05 Dec 2023 17:01:31 -0800 From: Jesse Brandeburg To: intel-wired-lan@lists.osuosl.org Cc: Jesse Brandeburg , netdev@vger.kernel.org, aleksander.lobakin@intel.com, przemyslaw.kitszel@intel.com, horms@kernel.org, marcin.szycik@linux.intel.com, Julia Lawall Subject: [PATCH iwl-next v2 03/15] intel: legacy: field prep conversion Date: Tue, 5 Dec 2023 17:01:02 -0800 Message-Id: <20231206010114.2259388-4-jesse.brandeburg@intel.com> X-Mailer: git-send-email 2.39.3 In-Reply-To: <20231206010114.2259388-1-jesse.brandeburg@intel.com> References: <20231206010114.2259388-1-jesse.brandeburg@intel.com> Precedence: bulk X-Mailing-List: netdev@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Patchwork-Delegate: kuba@kernel.org Refactor several older Intel drivers to use FIELD_PREP(), which reduces lines of code and adds clarity of intent. This code was generated by the following coccinelle/spatch script and then manually repaired. @prep2@ constant shift,mask; type T; expression a; @@ -(((T)(a) << shift) & mask) +FIELD_PREP(mask, a) @prep@ constant shift,mask; type T; expression a; @@ -((T)((a) << shift) & mask) +FIELD_PREP(mask, a) Cc: Julia Lawall Reviewed-by: Marcin Szycik Reviewed-by: Simon Horman Signed-off-by: Jesse Brandeburg Tested-by: Pucha Himasekhar Reddy (A Contingent worker at Intel) --- v2: updated commit message with new script --- drivers/net/ethernet/intel/e1000e/80003es2lan.c | 7 +++---- drivers/net/ethernet/intel/e1000e/phy.c | 7 +++---- drivers/net/ethernet/intel/fm10k/fm10k_pf.c | 3 +-- drivers/net/ethernet/intel/igb/e1000_phy.c | 4 ++-- drivers/net/ethernet/intel/igb/igb_ethtool.c | 3 +-- drivers/net/ethernet/intel/igb/igb_main.c | 9 +++------ drivers/net/ethernet/intel/ixgbe/ixgbe_82598.c | 2 +- drivers/net/ethernet/intel/ixgbe/ixgbe_fcoe.c | 4 ++-- 8 files changed, 16 insertions(+), 23 deletions(-) diff --git a/drivers/net/ethernet/intel/e1000e/80003es2lan.c b/drivers/net/ethernet/intel/e1000e/80003es2lan.c index 74671201208e..31fce3e4e8af 100644 --- a/drivers/net/ethernet/intel/e1000e/80003es2lan.c +++ b/drivers/net/ethernet/intel/e1000e/80003es2lan.c @@ -1210,8 +1210,8 @@ static s32 e1000_read_kmrn_reg_80003es2lan(struct e1000_hw *hw, u32 offset, if (ret_val) return ret_val; - kmrnctrlsta = ((offset << E1000_KMRNCTRLSTA_OFFSET_SHIFT) & - E1000_KMRNCTRLSTA_OFFSET) | E1000_KMRNCTRLSTA_REN; + kmrnctrlsta = FIELD_PREP(E1000_KMRNCTRLSTA_OFFSET, offset) | + E1000_KMRNCTRLSTA_REN; ew32(KMRNCTRLSTA, kmrnctrlsta); e1e_flush(); @@ -1245,8 +1245,7 @@ static s32 e1000_write_kmrn_reg_80003es2lan(struct e1000_hw *hw, u32 offset, if (ret_val) return ret_val; - kmrnctrlsta = ((offset << E1000_KMRNCTRLSTA_OFFSET_SHIFT) & - E1000_KMRNCTRLSTA_OFFSET) | data; + kmrnctrlsta = FIELD_PREP(E1000_KMRNCTRLSTA_OFFSET, offset) | data; ew32(KMRNCTRLSTA, kmrnctrlsta); e1e_flush(); diff --git a/drivers/net/ethernet/intel/e1000e/phy.c b/drivers/net/ethernet/intel/e1000e/phy.c index 08c3d477dd6f..2498f021eb02 100644 --- a/drivers/net/ethernet/intel/e1000e/phy.c +++ b/drivers/net/ethernet/intel/e1000e/phy.c @@ -463,8 +463,8 @@ static s32 __e1000_read_kmrn_reg(struct e1000_hw *hw, u32 offset, u16 *data, return 
ret_val; } - kmrnctrlsta = ((offset << E1000_KMRNCTRLSTA_OFFSET_SHIFT) & - E1000_KMRNCTRLSTA_OFFSET) | E1000_KMRNCTRLSTA_REN; + kmrnctrlsta = FIELD_PREP(E1000_KMRNCTRLSTA_OFFSET, offset) | + E1000_KMRNCTRLSTA_REN; ew32(KMRNCTRLSTA, kmrnctrlsta); e1e_flush(); @@ -536,8 +536,7 @@ static s32 __e1000_write_kmrn_reg(struct e1000_hw *hw, u32 offset, u16 data, return ret_val; } - kmrnctrlsta = ((offset << E1000_KMRNCTRLSTA_OFFSET_SHIFT) & - E1000_KMRNCTRLSTA_OFFSET) | data; + kmrnctrlsta = FIELD_PREP(E1000_KMRNCTRLSTA_OFFSET, offset) | data; ew32(KMRNCTRLSTA, kmrnctrlsta); e1e_flush(); diff --git a/drivers/net/ethernet/intel/fm10k/fm10k_pf.c b/drivers/net/ethernet/intel/fm10k/fm10k_pf.c index ae700a1807c6..1eea0ec5dbcf 100644 --- a/drivers/net/ethernet/intel/fm10k/fm10k_pf.c +++ b/drivers/net/ethernet/intel/fm10k/fm10k_pf.c @@ -866,8 +866,7 @@ static s32 fm10k_iov_assign_default_mac_vlan_pf(struct fm10k_hw *hw, * register is RO from the VF, so the PF must do this even in the * case of notifying the VF of a new VID via the mailbox. */ - txqctl = ((u32)vf_vid << FM10K_TXQCTL_VID_SHIFT) & - FM10K_TXQCTL_VID_MASK; + txqctl = FIELD_PREP(FM10K_TXQCTL_VID_MASK, vf_vid); txqctl |= (vf_idx << FM10K_TXQCTL_TC_SHIFT) | FM10K_TXQCTL_VF | vf_idx; diff --git a/drivers/net/ethernet/intel/igb/e1000_phy.c b/drivers/net/ethernet/intel/igb/e1000_phy.c index 3c1b562a3271..c84e7356cdb1 100644 --- a/drivers/net/ethernet/intel/igb/e1000_phy.c +++ b/drivers/net/ethernet/intel/igb/e1000_phy.c @@ -255,7 +255,7 @@ s32 igb_read_phy_reg_i2c(struct e1000_hw *hw, u32 offset, u16 *data) } /* Need to byte-swap the 16-bit value. */ - *data = ((i2ccmd >> 8) & 0x00FF) | ((i2ccmd << 8) & 0xFF00); + *data = ((i2ccmd >> 8) & 0x00FF) | FIELD_PREP(0xFF00, i2ccmd); return 0; } @@ -282,7 +282,7 @@ s32 igb_write_phy_reg_i2c(struct e1000_hw *hw, u32 offset, u16 data) } /* Swap the data bytes for the I2C interface */ - phy_data_swapped = ((data >> 8) & 0x00FF) | ((data << 8) & 0xFF00); + phy_data_swapped = ((data >> 8) & 0x00FF) | FIELD_PREP(0xFF00, data); /* Set up Op-code, Phy Address, and register address in the I2CCMD * register. 
The MAC will take care of interfacing with the diff --git a/drivers/net/ethernet/intel/igb/igb_ethtool.c b/drivers/net/ethernet/intel/igb/igb_ethtool.c index 16d2a55d5e17..f03977f2323e 100644 --- a/drivers/net/ethernet/intel/igb/igb_ethtool.c +++ b/drivers/net/ethernet/intel/igb/igb_ethtool.c @@ -2713,8 +2713,7 @@ static int igb_rxnfc_write_etype_filter(struct igb_adapter *adapter, etqf |= (etype & E1000_ETQF_ETYPE_MASK); etqf &= ~E1000_ETQF_QUEUE_MASK; - etqf |= ((input->action << E1000_ETQF_QUEUE_SHIFT) - & E1000_ETQF_QUEUE_MASK); + etqf |= FIELD_PREP(E1000_ETQF_QUEUE_MASK, input->action); etqf |= E1000_ETQF_QUEUE_ENABLE; wr32(E1000_ETQF(i), etqf); diff --git a/drivers/net/ethernet/intel/igb/igb_main.c b/drivers/net/ethernet/intel/igb/igb_main.c index b2295caa2f0a..897eb36bb609 100644 --- a/drivers/net/ethernet/intel/igb/igb_main.c +++ b/drivers/net/ethernet/intel/igb/igb_main.c @@ -9810,8 +9810,7 @@ static void igb_set_vf_rate_limit(struct e1000_hw *hw, int vf, int tx_rate, tx_rate; bcnrc_val = E1000_RTTBCNRC_RS_ENA; - bcnrc_val |= ((rf_int << E1000_RTTBCNRC_RF_INT_SHIFT) & - E1000_RTTBCNRC_RF_INT_MASK); + bcnrc_val |= FIELD_PREP(E1000_RTTBCNRC_RF_INT_MASK, rf_int); bcnrc_val |= (rf_dec & E1000_RTTBCNRC_RF_DEC_MASK); } else { bcnrc_val = 0; @@ -10000,8 +9999,7 @@ static void igb_init_dmac(struct igb_adapter *adapter, u32 pba) hwm = 64 * (pba - 6); reg = rd32(E1000_FCRTC); reg &= ~E1000_FCRTC_RTH_COAL_MASK; - reg |= ((hwm << E1000_FCRTC_RTH_COAL_SHIFT) - & E1000_FCRTC_RTH_COAL_MASK); + reg |= FIELD_PREP(E1000_FCRTC_RTH_COAL_MASK, hwm); wr32(E1000_FCRTC, reg); /* Set the DMA Coalescing Rx threshold to PBA - 2 * max @@ -10010,8 +10008,7 @@ static void igb_init_dmac(struct igb_adapter *adapter, u32 pba) dmac_thr = pba - 10; reg = rd32(E1000_DMACR); reg &= ~E1000_DMACR_DMACTHR_MASK; - reg |= ((dmac_thr << E1000_DMACR_DMACTHR_SHIFT) - & E1000_DMACR_DMACTHR_MASK); + reg |= FIELD_PREP(E1000_DMACR_DMACTHR_MASK, dmac_thr); /* transition to L0x or L1 if available..*/ reg |= (E1000_DMACR_DMAC_EN | E1000_DMACR_DMAC_LX_MASK); diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_82598.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_82598.c index 100388968e4d..0470b69d834c 100644 --- a/drivers/net/ethernet/intel/ixgbe/ixgbe_82598.c +++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_82598.c @@ -794,7 +794,7 @@ static s32 ixgbe_set_vmdq_82598(struct ixgbe_hw *hw, u32 rar, u32 vmdq) rar_high = IXGBE_READ_REG(hw, IXGBE_RAH(rar)); rar_high &= ~IXGBE_RAH_VIND_MASK; - rar_high |= ((vmdq << IXGBE_RAH_VIND_SHIFT) & IXGBE_RAH_VIND_MASK); + rar_high |= FIELD_PREP(IXGBE_RAH_VIND_MASK, vmdq); IXGBE_WRITE_REG(hw, IXGBE_RAH(rar), rar_high); return 0; } diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_fcoe.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_fcoe.c index 7311bd545acf..18d63c8c2ff4 100644 --- a/drivers/net/ethernet/intel/ixgbe/ixgbe_fcoe.c +++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_fcoe.c @@ -670,8 +670,8 @@ void ixgbe_configure_fcoe(struct ixgbe_adapter *adapter) int fcoe_i_h = fcoe->offset + ((i + fcreta_size) % fcoe->indices); fcoe_q_h = adapter->rx_ring[fcoe_i_h]->reg_idx; - fcoe_q_h = (fcoe_q_h << IXGBE_FCRETA_ENTRY_HIGH_SHIFT) & - IXGBE_FCRETA_ENTRY_HIGH_MASK; + fcoe_q_h = FIELD_PREP(IXGBE_FCRETA_ENTRY_HIGH_MASK, + fcoe_q_h); } fcoe_i = fcoe->offset + (i % fcoe->indices); From patchwork Wed Dec 6 01:01:03 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jesse Brandeburg X-Patchwork-Id: 13480916 X-Patchwork-Delegate: kuba@kernel.org 
From: Jesse Brandeburg
To: intel-wired-lan@lists.osuosl.org
Cc: Jesse Brandeburg, netdev@vger.kernel.org, aleksander.lobakin@intel.com, przemyslaw.kitszel@intel.com, horms@kernel.org, marcin.szycik@linux.intel.com, Julia Lawall
Subject: [PATCH iwl-next v2 04/15] i40e: field prep conversion
Date: Tue, 5 Dec 2023 17:01:03 -0800
Message-Id: <20231206010114.2259388-5-jesse.brandeburg@intel.com>
In-Reply-To: <20231206010114.2259388-1-jesse.brandeburg@intel.com>
References: <20231206010114.2259388-1-jesse.brandeburg@intel.com>
X-Mailing-List: netdev@vger.kernel.org

Refactor the i40e driver to use FIELD_PREP(), which reduces lines of code and adds clarity of intent. This code was generated by the coccinelle/spatch script below and then manually repaired. One function with multiple ifs was also refactored to return early so that its lines fit within 80 columns.
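Before the semantic patch itself, a small self-contained check of why this rewrite is safe: for any value that fits the field, FIELD_PREP(mask, a) and the open-coded ((a << shift) & mask) produce identical results. The demo_field_prep() helper and DEMO_* constants below are assumptions that merely mirror the include/linux/bitfield.h semantics (shift derived from the mask's lowest set bit); they are not the kernel macros themselves.

#include <assert.h>
#include <stdint.h>
#include <stdio.h>

#define DEMO_MASK	0x00FF0000u	/* assumed 8-bit field at bits 16..23 */
#define DEMO_SHIFT	16

/* Mirrors FIELD_PREP() behavior: shift the value up to the mask's
 * position and keep only the bits covered by the mask.
 * __builtin_ctz() is a GCC/Clang builtin (count trailing zeros).
 */
static uint32_t demo_field_prep(uint32_t mask, uint32_t val)
{
	return (val << __builtin_ctz(mask)) & mask;
}

int main(void)
{
	/* every value that fits the 8-bit demo field */
	for (uint32_t v = 0; v <= 0xFF; v++)
		assert(demo_field_prep(DEMO_MASK, v) ==
		       ((v << DEMO_SHIFT) & DEMO_MASK));

	puts("open-coded shift/mask and FIELD_PREP-style helper agree");
	return 0;
}

For in-range inputs the two forms are bit-for-bit identical, which is what allows the conversion to be applied mechanically by the script that follows.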
@prep2@ constant shift,mask; type T; expression a; @@ -(((T)(a) << shift) & mask) +FIELD_PREP(mask, a) @prep@ constant shift,mask; type T; expression a; @@ -((T)((a) << shift) & mask) +FIELD_PREP(mask, a) Cc: Julia Lawall Reviewed-by: Marcin Szycik Reviewed-by: Simon Horman Signed-off-by: Jesse Brandeburg Tested-by: Pucha Himasekhar Reddy (A Contingent worker at Intel) --- v2: updated commit message --- drivers/net/ethernet/intel/i40e/i40e_common.c | 83 ++++++------- drivers/net/ethernet/intel/i40e/i40e_dcb.c | 116 ++++++++---------- drivers/net/ethernet/intel/i40e/i40e_main.c | 12 +- drivers/net/ethernet/intel/i40e/i40e_txrx.c | 41 +++---- .../ethernet/intel/i40e/i40e_virtchnl_pf.c | 8 +- 5 files changed, 109 insertions(+), 151 deletions(-) diff --git a/drivers/net/ethernet/intel/i40e/i40e_common.c b/drivers/net/ethernet/intel/i40e/i40e_common.c index 522cf2e5f365..4ec4ab2c7d48 100644 --- a/drivers/net/ethernet/intel/i40e/i40e_common.c +++ b/drivers/net/ethernet/intel/i40e/i40e_common.c @@ -249,6 +249,7 @@ static int i40e_aq_get_set_rss_lut(struct i40e_hw *hw, struct i40e_aqc_get_set_rss_lut *cmd_resp = (struct i40e_aqc_get_set_rss_lut *)&desc.params.raw; int status; + u16 flags; if (set) i40e_fill_default_direct_cmd_desc(&desc, @@ -261,23 +262,18 @@ static int i40e_aq_get_set_rss_lut(struct i40e_hw *hw, desc.flags |= cpu_to_le16((u16)I40E_AQ_FLAG_BUF); desc.flags |= cpu_to_le16((u16)I40E_AQ_FLAG_RD); - cmd_resp->vsi_id = - cpu_to_le16((u16)((vsi_id << - I40E_AQC_SET_RSS_LUT_VSI_ID_SHIFT) & - I40E_AQC_SET_RSS_LUT_VSI_ID_MASK)); - cmd_resp->vsi_id |= cpu_to_le16((u16)I40E_AQC_SET_RSS_LUT_VSI_VALID); + vsi_id = FIELD_PREP(I40E_AQC_SET_RSS_LUT_VSI_ID_MASK, vsi_id) | + FIELD_PREP(I40E_AQC_SET_RSS_LUT_VSI_VALID, 1); + cmd_resp->vsi_id = cpu_to_le16(vsi_id); if (pf_lut) - cmd_resp->flags |= cpu_to_le16((u16) - ((I40E_AQC_SET_RSS_LUT_TABLE_TYPE_PF << - I40E_AQC_SET_RSS_LUT_TABLE_TYPE_SHIFT) & - I40E_AQC_SET_RSS_LUT_TABLE_TYPE_MASK)); + flags = FIELD_PREP(I40E_AQC_SET_RSS_LUT_TABLE_TYPE_MASK, + I40E_AQC_SET_RSS_LUT_TABLE_TYPE_PF); else - cmd_resp->flags |= cpu_to_le16((u16) - ((I40E_AQC_SET_RSS_LUT_TABLE_TYPE_VSI << - I40E_AQC_SET_RSS_LUT_TABLE_TYPE_SHIFT) & - I40E_AQC_SET_RSS_LUT_TABLE_TYPE_MASK)); + flags = FIELD_PREP(I40E_AQC_SET_RSS_LUT_TABLE_TYPE_MASK, + I40E_AQC_SET_RSS_LUT_TABLE_TYPE_VSI); + cmd_resp->flags = cpu_to_le16(flags); status = i40e_asq_send_command(hw, &desc, lut, lut_size, NULL); return status; @@ -347,11 +343,9 @@ static int i40e_aq_get_set_rss_key(struct i40e_hw *hw, desc.flags |= cpu_to_le16((u16)I40E_AQ_FLAG_BUF); desc.flags |= cpu_to_le16((u16)I40E_AQ_FLAG_RD); - cmd_resp->vsi_id = - cpu_to_le16((u16)((vsi_id << - I40E_AQC_SET_RSS_KEY_VSI_ID_SHIFT) & - I40E_AQC_SET_RSS_KEY_VSI_ID_MASK)); - cmd_resp->vsi_id |= cpu_to_le16((u16)I40E_AQC_SET_RSS_KEY_VSI_VALID); + vsi_id = FIELD_PREP(I40E_AQC_SET_RSS_KEY_VSI_ID_MASK, vsi_id) | + FIELD_PREP(I40E_AQC_SET_RSS_KEY_VSI_VALID, 1); + cmd_resp->vsi_id = cpu_to_le16(vsi_id); status = i40e_asq_send_command(hw, &desc, key, key_size, NULL); @@ -1289,14 +1283,14 @@ void i40e_led_set(struct i40e_hw *hw, u32 mode, bool blink) pin_func = I40E_PIN_FUNC_LED; gpio_val &= ~I40E_GLGEN_GPIO_CTL_PIN_FUNC_MASK; - gpio_val |= ((pin_func << - I40E_GLGEN_GPIO_CTL_PIN_FUNC_SHIFT) & - I40E_GLGEN_GPIO_CTL_PIN_FUNC_MASK); + gpio_val |= + FIELD_PREP(I40E_GLGEN_GPIO_CTL_PIN_FUNC_MASK, + pin_func); } gpio_val &= ~I40E_GLGEN_GPIO_CTL_LED_MODE_MASK; /* this & is a bit of paranoia, but serves as a range check */ - gpio_val |= ((mode << 
I40E_GLGEN_GPIO_CTL_LED_MODE_SHIFT) & - I40E_GLGEN_GPIO_CTL_LED_MODE_MASK); + gpio_val |= FIELD_PREP(I40E_GLGEN_GPIO_CTL_LED_MODE_MASK, + mode); if (blink) gpio_val |= BIT(I40E_GLGEN_GPIO_CTL_LED_BLINK_SHIFT); @@ -3515,8 +3509,7 @@ int i40e_aq_get_lldp_mib(struct i40e_hw *hw, u8 bridge_type, desc.flags |= cpu_to_le16((u16)I40E_AQ_FLAG_BUF); cmd->type = mib_type & I40E_AQ_LLDP_MIB_TYPE_MASK; - cmd->type |= ((bridge_type << I40E_AQ_LLDP_BRIDGE_TYPE_SHIFT) & - I40E_AQ_LLDP_BRIDGE_TYPE_MASK); + cmd->type |= FIELD_PREP(I40E_AQ_LLDP_BRIDGE_TYPE_MASK, bridge_type); desc.datalen = cpu_to_le16(buff_size); @@ -4234,30 +4227,25 @@ int i40e_set_filter_control(struct i40e_hw *hw, /* Program required PE hash buckets for the PF */ val &= ~I40E_PFQF_CTL_0_PEHSIZE_MASK; - val |= ((u32)settings->pe_filt_num << I40E_PFQF_CTL_0_PEHSIZE_SHIFT) & - I40E_PFQF_CTL_0_PEHSIZE_MASK; + val |= FIELD_PREP(I40E_PFQF_CTL_0_PEHSIZE_MASK, settings->pe_filt_num); /* Program required PE contexts for the PF */ val &= ~I40E_PFQF_CTL_0_PEDSIZE_MASK; - val |= ((u32)settings->pe_cntx_num << I40E_PFQF_CTL_0_PEDSIZE_SHIFT) & - I40E_PFQF_CTL_0_PEDSIZE_MASK; + val |= FIELD_PREP(I40E_PFQF_CTL_0_PEDSIZE_MASK, settings->pe_cntx_num); /* Program required FCoE hash buckets for the PF */ val &= ~I40E_PFQF_CTL_0_PFFCHSIZE_MASK; - val |= ((u32)settings->fcoe_filt_num << - I40E_PFQF_CTL_0_PFFCHSIZE_SHIFT) & - I40E_PFQF_CTL_0_PFFCHSIZE_MASK; + val |= FIELD_PREP(I40E_PFQF_CTL_0_PFFCHSIZE_MASK, + settings->fcoe_filt_num); /* Program required FCoE DDP contexts for the PF */ val &= ~I40E_PFQF_CTL_0_PFFCDSIZE_MASK; - val |= ((u32)settings->fcoe_cntx_num << - I40E_PFQF_CTL_0_PFFCDSIZE_SHIFT) & - I40E_PFQF_CTL_0_PFFCDSIZE_MASK; + val |= FIELD_PREP(I40E_PFQF_CTL_0_PFFCDSIZE_MASK, + settings->fcoe_cntx_num); /* Program Hash LUT size for the PF */ val &= ~I40E_PFQF_CTL_0_HASHLUTSIZE_MASK; if (settings->hash_lut_size == I40E_HASH_LUT_SIZE_512) hash_lut_size = 1; - val |= (hash_lut_size << I40E_PFQF_CTL_0_HASHLUTSIZE_SHIFT) & - I40E_PFQF_CTL_0_HASHLUTSIZE_MASK; + val |= FIELD_PREP(I40E_PFQF_CTL_0_HASHLUTSIZE_MASK, hash_lut_size); /* Enable FDIR, Ethertype and MACVLAN filters for PF and VFs */ if (settings->enable_fdir) @@ -5319,16 +5307,17 @@ static void i40e_mdio_if_number_selection(struct i40e_hw *hw, bool set_mdio, u8 mdio_num, struct i40e_aqc_phy_register_access *cmd) { - if (set_mdio && cmd->phy_interface == I40E_AQ_PHY_REG_ACCESS_EXTERNAL) { - if (test_bit(I40E_HW_CAP_AQ_PHY_ACCESS_EXTENDED, hw->caps)) - cmd->cmd_flags |= - I40E_AQ_PHY_REG_ACCESS_SET_MDIO_IF_NUMBER | - ((mdio_num << - I40E_AQ_PHY_REG_ACCESS_MDIO_IF_NUMBER_SHIFT) & - I40E_AQ_PHY_REG_ACCESS_MDIO_IF_NUMBER_MASK); - else - i40e_debug(hw, I40E_DEBUG_PHY, - "MDIO I/F number selection not supported by current FW version.\n"); + if (!set_mdio || + cmd->phy_interface != I40E_AQ_PHY_REG_ACCESS_EXTERNAL) + return; + + if (test_bit(I40E_HW_CAP_AQ_PHY_ACCESS_EXTENDED, hw->caps)) { + cmd->cmd_flags |= + I40E_AQ_PHY_REG_ACCESS_SET_MDIO_IF_NUMBER | + FIELD_PREP(I40E_AQ_PHY_REG_ACCESS_MDIO_IF_NUMBER_MASK, + mdio_num); + } else { + i40e_debug(hw, I40E_DEBUG_PHY, "MDIO I/F number selection not supported by current FW version.\n"); } } diff --git a/drivers/net/ethernet/intel/i40e/i40e_dcb.c b/drivers/net/ethernet/intel/i40e/i40e_dcb.c index 0d334036ab8b..a0691b7c87c4 100644 --- a/drivers/net/ethernet/intel/i40e/i40e_dcb.c +++ b/drivers/net/ethernet/intel/i40e/i40e_dcb.c @@ -1320,20 +1320,16 @@ void i40e_dcb_hw_rx_fifo_config(struct i40e_hw *hw, u32 reg = rd32(hw, I40E_PRTDCB_RETSC); reg &= 
~I40E_PRTDCB_RETSC_ETS_MODE_MASK; - reg |= ((u32)ets_mode << I40E_PRTDCB_RETSC_ETS_MODE_SHIFT) & - I40E_PRTDCB_RETSC_ETS_MODE_MASK; + reg |= FIELD_PREP(I40E_PRTDCB_RETSC_ETS_MODE_MASK, ets_mode); reg &= ~I40E_PRTDCB_RETSC_NON_ETS_MODE_MASK; - reg |= ((u32)non_ets_mode << I40E_PRTDCB_RETSC_NON_ETS_MODE_SHIFT) & - I40E_PRTDCB_RETSC_NON_ETS_MODE_MASK; + reg |= FIELD_PREP(I40E_PRTDCB_RETSC_NON_ETS_MODE_MASK, non_ets_mode); reg &= ~I40E_PRTDCB_RETSC_ETS_MAX_EXP_MASK; - reg |= (max_exponent << I40E_PRTDCB_RETSC_ETS_MAX_EXP_SHIFT) & - I40E_PRTDCB_RETSC_ETS_MAX_EXP_MASK; + reg |= FIELD_PREP(I40E_PRTDCB_RETSC_ETS_MAX_EXP_MASK, max_exponent); reg &= ~I40E_PRTDCB_RETSC_LLTC_MASK; - reg |= (lltc_map << I40E_PRTDCB_RETSC_LLTC_SHIFT) & - I40E_PRTDCB_RETSC_LLTC_MASK; + reg |= FIELD_PREP(I40E_PRTDCB_RETSC_LLTC_MASK, lltc_map); wr32(hw, I40E_PRTDCB_RETSC, reg); } @@ -1388,14 +1384,12 @@ void i40e_dcb_hw_rx_cmd_monitor_config(struct i40e_hw *hw, */ reg = rd32(hw, I40E_PRT_SWR_PM_THR); reg &= ~I40E_PRT_SWR_PM_THR_THRESHOLD_MASK; - reg |= (threshold << I40E_PRT_SWR_PM_THR_THRESHOLD_SHIFT) & - I40E_PRT_SWR_PM_THR_THRESHOLD_MASK; + reg |= FIELD_PREP(I40E_PRT_SWR_PM_THR_THRESHOLD_MASK, threshold); wr32(hw, I40E_PRT_SWR_PM_THR, reg); reg = rd32(hw, I40E_PRTDCB_RPPMC); reg &= ~I40E_PRTDCB_RPPMC_RX_FIFO_SIZE_MASK; - reg |= (fifo_size << I40E_PRTDCB_RPPMC_RX_FIFO_SIZE_SHIFT) & - I40E_PRTDCB_RPPMC_RX_FIFO_SIZE_MASK; + reg |= FIELD_PREP(I40E_PRTDCB_RPPMC_RX_FIFO_SIZE_MASK, fifo_size); wr32(hw, I40E_PRTDCB_RPPMC, reg); } @@ -1437,19 +1431,17 @@ void i40e_dcb_hw_pfc_config(struct i40e_hw *hw, reg &= ~I40E_PRTDCB_MFLCN_RFCE_MASK; reg &= ~I40E_PRTDCB_MFLCN_RPFCE_MASK; if (pfc_en) { - reg |= BIT(I40E_PRTDCB_MFLCN_RPFCM_SHIFT) & - I40E_PRTDCB_MFLCN_RPFCM_MASK; - reg |= ((u32)pfc_en << I40E_PRTDCB_MFLCN_RPFCE_SHIFT) & - I40E_PRTDCB_MFLCN_RPFCE_MASK; + reg |= FIELD_PREP(I40E_PRTDCB_MFLCN_RPFCM_MASK, 1); + reg |= FIELD_PREP(I40E_PRTDCB_MFLCN_RPFCE_MASK, + pfc_en); } wr32(hw, I40E_PRTDCB_MFLCN, reg); reg = rd32(hw, I40E_PRTDCB_FCCFG); reg &= ~I40E_PRTDCB_FCCFG_TFCE_MASK; if (pfc_en) - reg |= (I40E_DCB_PFC_ENABLED << - I40E_PRTDCB_FCCFG_TFCE_SHIFT) & - I40E_PRTDCB_FCCFG_TFCE_MASK; + reg |= FIELD_PREP(I40E_PRTDCB_FCCFG_TFCE_MASK, + I40E_DCB_PFC_ENABLED); wr32(hw, I40E_PRTDCB_FCCFG, reg); /* FCTTV and FCRTV to be set by default */ @@ -1467,25 +1459,22 @@ void i40e_dcb_hw_pfc_config(struct i40e_hw *hw, reg = rd32(hw, I40E_PRTMAC_HSEC_CTL_RX_PAUSE_ENABLE); reg &= ~I40E_PRTMAC_HSEC_CTL_RX_PAUSE_ENABLE_MASK; - reg |= ((u32)pfc_en << - I40E_PRTMAC_HSEC_CTL_RX_PAUSE_ENABLE_SHIFT) & - I40E_PRTMAC_HSEC_CTL_RX_PAUSE_ENABLE_MASK; + reg |= FIELD_PREP(I40E_PRTMAC_HSEC_CTL_RX_PAUSE_ENABLE_MASK, + pfc_en); wr32(hw, I40E_PRTMAC_HSEC_CTL_RX_PAUSE_ENABLE, reg); reg = rd32(hw, I40E_PRTMAC_HSEC_CTL_TX_PAUSE_ENABLE); reg &= ~I40E_PRTMAC_HSEC_CTL_TX_PAUSE_ENABLE_MASK; - reg |= ((u32)pfc_en << - I40E_PRTMAC_HSEC_CTL_TX_PAUSE_ENABLE_SHIFT) & - I40E_PRTMAC_HSEC_CTL_TX_PAUSE_ENABLE_MASK; + reg |= FIELD_PREP(I40E_PRTMAC_HSEC_CTL_TX_PAUSE_ENABLE_MASK, + pfc_en); wr32(hw, I40E_PRTMAC_HSEC_CTL_TX_PAUSE_ENABLE, reg); for (i = 0; i < I40E_PRTMAC_HSEC_CTL_TX_PAUSE_REFRESH_TIMER_MAX_INDEX; i++) { reg = rd32(hw, I40E_PRTMAC_HSEC_CTL_TX_PAUSE_REFRESH_TIMER(i)); reg &= ~I40E_PRTMAC_HSEC_CTL_TX_PAUSE_REFRESH_TIMER_MASK; if (pfc_en) { - reg |= ((u32)refresh_time << - I40E_PRTMAC_HSEC_CTL_TX_PAUSE_REFRESH_TIMER_SHIFT) & - I40E_PRTMAC_HSEC_CTL_TX_PAUSE_REFRESH_TIMER_MASK; + reg |= FIELD_PREP(I40E_PRTMAC_HSEC_CTL_TX_PAUSE_REFRESH_TIMER_MASK, + refresh_time); } wr32(hw, 
I40E_PRTMAC_HSEC_CTL_TX_PAUSE_REFRESH_TIMER(i), reg); } @@ -1497,14 +1486,12 @@ void i40e_dcb_hw_pfc_config(struct i40e_hw *hw, reg = rd32(hw, I40E_PRTDCB_TC2PFC); reg &= ~I40E_PRTDCB_TC2PFC_TC2PFC_MASK; - reg |= ((u32)tc2pfc << I40E_PRTDCB_TC2PFC_TC2PFC_SHIFT) & - I40E_PRTDCB_TC2PFC_TC2PFC_MASK; + reg |= FIELD_PREP(I40E_PRTDCB_TC2PFC_TC2PFC_MASK, tc2pfc); wr32(hw, I40E_PRTDCB_TC2PFC, reg); reg = rd32(hw, I40E_PRTDCB_RUP); reg &= ~I40E_PRTDCB_RUP_NOVLANUP_MASK; - reg |= ((u32)first_pfc_prio << I40E_PRTDCB_RUP_NOVLANUP_SHIFT) & - I40E_PRTDCB_RUP_NOVLANUP_MASK; + reg |= FIELD_PREP(I40E_PRTDCB_RUP_NOVLANUP_MASK, first_pfc_prio); wr32(hw, I40E_PRTDCB_RUP, reg); reg = rd32(hw, I40E_PRTDCB_TDPMC); @@ -1536,8 +1523,7 @@ void i40e_dcb_hw_set_num_tc(struct i40e_hw *hw, u8 num_tc) u32 reg = rd32(hw, I40E_PRTDCB_GENC); reg &= ~I40E_PRTDCB_GENC_NUMTC_MASK; - reg |= ((u32)num_tc << I40E_PRTDCB_GENC_NUMTC_SHIFT) & - I40E_PRTDCB_GENC_NUMTC_MASK; + reg |= FIELD_PREP(I40E_PRTDCB_GENC_NUMTC_MASK, num_tc); wr32(hw, I40E_PRTDCB_GENC, reg); } @@ -1576,12 +1562,12 @@ void i40e_dcb_hw_rx_ets_bw_config(struct i40e_hw *hw, u8 *bw_share, reg &= ~(I40E_PRTDCB_RETSTCC_BWSHARE_MASK | I40E_PRTDCB_RETSTCC_UPINTC_MODE_MASK | I40E_PRTDCB_RETSTCC_ETSTC_SHIFT); - reg |= ((u32)bw_share[i] << I40E_PRTDCB_RETSTCC_BWSHARE_SHIFT) & - I40E_PRTDCB_RETSTCC_BWSHARE_MASK; - reg |= ((u32)mode[i] << I40E_PRTDCB_RETSTCC_UPINTC_MODE_SHIFT) & - I40E_PRTDCB_RETSTCC_UPINTC_MODE_MASK; - reg |= ((u32)prio_type[i] << I40E_PRTDCB_RETSTCC_ETSTC_SHIFT) & - I40E_PRTDCB_RETSTCC_ETSTC_MASK; + reg |= FIELD_PREP(I40E_PRTDCB_RETSTCC_BWSHARE_MASK, + bw_share[i]); + reg |= FIELD_PREP(I40E_PRTDCB_RETSTCC_UPINTC_MODE_MASK, + mode[i]); + reg |= FIELD_PREP(I40E_PRTDCB_RETSTCC_ETSTC_MASK, + prio_type[i]); wr32(hw, I40E_PRTDCB_RETSTCC(i), reg); } } @@ -1721,8 +1707,7 @@ void i40e_dcb_hw_rx_pb_config(struct i40e_hw *hw, if (new_val < old_val) { reg = rd32(hw, I40E_PRTRPB_SLW); reg &= ~I40E_PRTRPB_SLW_SLW_MASK; - reg |= (new_val << I40E_PRTRPB_SLW_SLW_SHIFT) & - I40E_PRTRPB_SLW_SLW_MASK; + reg |= FIELD_PREP(I40E_PRTRPB_SLW_SLW_MASK, new_val); wr32(hw, I40E_PRTRPB_SLW, reg); } @@ -1735,8 +1720,8 @@ void i40e_dcb_hw_rx_pb_config(struct i40e_hw *hw, if (new_val < old_val) { reg = rd32(hw, I40E_PRTRPB_SLT(i)); reg &= ~I40E_PRTRPB_SLT_SLT_TCN_MASK; - reg |= (new_val << I40E_PRTRPB_SLT_SLT_TCN_SHIFT) & - I40E_PRTRPB_SLT_SLT_TCN_MASK; + reg |= FIELD_PREP(I40E_PRTRPB_SLT_SLT_TCN_MASK, + new_val); wr32(hw, I40E_PRTRPB_SLT(i), reg); } @@ -1745,8 +1730,8 @@ void i40e_dcb_hw_rx_pb_config(struct i40e_hw *hw, if (new_val < old_val) { reg = rd32(hw, I40E_PRTRPB_DLW(i)); reg &= ~I40E_PRTRPB_DLW_DLW_TCN_MASK; - reg |= (new_val << I40E_PRTRPB_DLW_DLW_TCN_SHIFT) & - I40E_PRTRPB_DLW_DLW_TCN_MASK; + reg |= FIELD_PREP(I40E_PRTRPB_DLW_DLW_TCN_MASK, + new_val); wr32(hw, I40E_PRTRPB_DLW(i), reg); } } @@ -1757,8 +1742,7 @@ void i40e_dcb_hw_rx_pb_config(struct i40e_hw *hw, if (new_val < old_val) { reg = rd32(hw, I40E_PRTRPB_SHW); reg &= ~I40E_PRTRPB_SHW_SHW_MASK; - reg |= (new_val << I40E_PRTRPB_SHW_SHW_SHIFT) & - I40E_PRTRPB_SHW_SHW_MASK; + reg |= FIELD_PREP(I40E_PRTRPB_SHW_SHW_MASK, new_val); wr32(hw, I40E_PRTRPB_SHW, reg); } @@ -1771,8 +1755,8 @@ void i40e_dcb_hw_rx_pb_config(struct i40e_hw *hw, if (new_val < old_val) { reg = rd32(hw, I40E_PRTRPB_SHT(i)); reg &= ~I40E_PRTRPB_SHT_SHT_TCN_MASK; - reg |= (new_val << I40E_PRTRPB_SHT_SHT_TCN_SHIFT) & - I40E_PRTRPB_SHT_SHT_TCN_MASK; + reg |= FIELD_PREP(I40E_PRTRPB_SHT_SHT_TCN_MASK, + new_val); wr32(hw, I40E_PRTRPB_SHT(i), reg); } @@ -1781,8 
+1765,8 @@ void i40e_dcb_hw_rx_pb_config(struct i40e_hw *hw, if (new_val < old_val) { reg = rd32(hw, I40E_PRTRPB_DHW(i)); reg &= ~I40E_PRTRPB_DHW_DHW_TCN_MASK; - reg |= (new_val << I40E_PRTRPB_DHW_DHW_TCN_SHIFT) & - I40E_PRTRPB_DHW_DHW_TCN_MASK; + reg |= FIELD_PREP(I40E_PRTRPB_DHW_DHW_TCN_MASK, + new_val); wr32(hw, I40E_PRTRPB_DHW(i), reg); } } @@ -1792,8 +1776,7 @@ void i40e_dcb_hw_rx_pb_config(struct i40e_hw *hw, new_val = new_pb_cfg->tc_pool_size[i]; reg = rd32(hw, I40E_PRTRPB_DPS(i)); reg &= ~I40E_PRTRPB_DPS_DPS_TCN_MASK; - reg |= (new_val << I40E_PRTRPB_DPS_DPS_TCN_SHIFT) & - I40E_PRTRPB_DPS_DPS_TCN_MASK; + reg |= FIELD_PREP(I40E_PRTRPB_DPS_DPS_TCN_MASK, new_val); wr32(hw, I40E_PRTRPB_DPS(i), reg); } @@ -1801,8 +1784,7 @@ void i40e_dcb_hw_rx_pb_config(struct i40e_hw *hw, new_val = new_pb_cfg->shared_pool_size; reg = rd32(hw, I40E_PRTRPB_SPS); reg &= ~I40E_PRTRPB_SPS_SPS_MASK; - reg |= (new_val << I40E_PRTRPB_SPS_SPS_SHIFT) & - I40E_PRTRPB_SPS_SPS_MASK; + reg |= FIELD_PREP(I40E_PRTRPB_SPS_SPS_MASK, new_val); wr32(hw, I40E_PRTRPB_SPS, reg); /* Program the shared pool low water mark per port if increasing */ @@ -1811,8 +1793,7 @@ void i40e_dcb_hw_rx_pb_config(struct i40e_hw *hw, if (new_val > old_val) { reg = rd32(hw, I40E_PRTRPB_SLW); reg &= ~I40E_PRTRPB_SLW_SLW_MASK; - reg |= (new_val << I40E_PRTRPB_SLW_SLW_SHIFT) & - I40E_PRTRPB_SLW_SLW_MASK; + reg |= FIELD_PREP(I40E_PRTRPB_SLW_SLW_MASK, new_val); wr32(hw, I40E_PRTRPB_SLW, reg); } @@ -1825,8 +1806,8 @@ void i40e_dcb_hw_rx_pb_config(struct i40e_hw *hw, if (new_val > old_val) { reg = rd32(hw, I40E_PRTRPB_SLT(i)); reg &= ~I40E_PRTRPB_SLT_SLT_TCN_MASK; - reg |= (new_val << I40E_PRTRPB_SLT_SLT_TCN_SHIFT) & - I40E_PRTRPB_SLT_SLT_TCN_MASK; + reg |= FIELD_PREP(I40E_PRTRPB_SLT_SLT_TCN_MASK, + new_val); wr32(hw, I40E_PRTRPB_SLT(i), reg); } @@ -1835,8 +1816,8 @@ void i40e_dcb_hw_rx_pb_config(struct i40e_hw *hw, if (new_val > old_val) { reg = rd32(hw, I40E_PRTRPB_DLW(i)); reg &= ~I40E_PRTRPB_DLW_DLW_TCN_MASK; - reg |= (new_val << I40E_PRTRPB_DLW_DLW_TCN_SHIFT) & - I40E_PRTRPB_DLW_DLW_TCN_MASK; + reg |= FIELD_PREP(I40E_PRTRPB_DLW_DLW_TCN_MASK, + new_val); wr32(hw, I40E_PRTRPB_DLW(i), reg); } } @@ -1847,8 +1828,7 @@ void i40e_dcb_hw_rx_pb_config(struct i40e_hw *hw, if (new_val > old_val) { reg = rd32(hw, I40E_PRTRPB_SHW); reg &= ~I40E_PRTRPB_SHW_SHW_MASK; - reg |= (new_val << I40E_PRTRPB_SHW_SHW_SHIFT) & - I40E_PRTRPB_SHW_SHW_MASK; + reg |= FIELD_PREP(I40E_PRTRPB_SHW_SHW_MASK, new_val); wr32(hw, I40E_PRTRPB_SHW, reg); } @@ -1861,8 +1841,8 @@ void i40e_dcb_hw_rx_pb_config(struct i40e_hw *hw, if (new_val > old_val) { reg = rd32(hw, I40E_PRTRPB_SHT(i)); reg &= ~I40E_PRTRPB_SHT_SHT_TCN_MASK; - reg |= (new_val << I40E_PRTRPB_SHT_SHT_TCN_SHIFT) & - I40E_PRTRPB_SHT_SHT_TCN_MASK; + reg |= FIELD_PREP(I40E_PRTRPB_SHT_SHT_TCN_MASK, + new_val); wr32(hw, I40E_PRTRPB_SHT(i), reg); } @@ -1871,8 +1851,8 @@ void i40e_dcb_hw_rx_pb_config(struct i40e_hw *hw, if (new_val > old_val) { reg = rd32(hw, I40E_PRTRPB_DHW(i)); reg &= ~I40E_PRTRPB_DHW_DHW_TCN_MASK; - reg |= (new_val << I40E_PRTRPB_DHW_DHW_TCN_SHIFT) & - I40E_PRTRPB_DHW_DHW_TCN_MASK; + reg |= FIELD_PREP(I40E_PRTRPB_DHW_DHW_TCN_MASK, + new_val); wr32(hw, I40E_PRTRPB_DHW(i), reg); } } diff --git a/drivers/net/ethernet/intel/i40e/i40e_main.c b/drivers/net/ethernet/intel/i40e/i40e_main.c index 51ee870ffa36..0dfe472747c6 100644 --- a/drivers/net/ethernet/intel/i40e/i40e_main.c +++ b/drivers/net/ethernet/intel/i40e/i40e_main.c @@ -3536,21 +3536,19 @@ static int i40e_configure_tx_ring(struct i40e_ring *ring) else 
return -EINVAL; - qtx_ctl |= (ring->ch->vsi_number << - I40E_QTX_CTL_VFVM_INDX_SHIFT) & - I40E_QTX_CTL_VFVM_INDX_MASK; + qtx_ctl |= FIELD_PREP(I40E_QTX_CTL_VFVM_INDX_MASK, + ring->ch->vsi_number); } else { if (vsi->type == I40E_VSI_VMDQ2) { qtx_ctl = I40E_QTX_CTL_VM_QUEUE; - qtx_ctl |= ((vsi->id) << I40E_QTX_CTL_VFVM_INDX_SHIFT) & - I40E_QTX_CTL_VFVM_INDX_MASK; + qtx_ctl |= FIELD_PREP(I40E_QTX_CTL_VFVM_INDX_MASK, + vsi->id); } else { qtx_ctl = I40E_QTX_CTL_PF_QUEUE; } } - qtx_ctl |= ((hw->pf_id << I40E_QTX_CTL_PF_INDX_SHIFT) & - I40E_QTX_CTL_PF_INDX_MASK); + qtx_ctl |= FIELD_PREP(I40E_QTX_CTL_PF_INDX_MASK, hw->pf_id); wr32(hw, I40E_QTX_CTL(pf_q), qtx_ctl); i40e_flush(hw); diff --git a/drivers/net/ethernet/intel/i40e/i40e_txrx.c b/drivers/net/ethernet/intel/i40e/i40e_txrx.c index b82df5bdfac0..b0df3dde1386 100644 --- a/drivers/net/ethernet/intel/i40e/i40e_txrx.c +++ b/drivers/net/ethernet/intel/i40e/i40e_txrx.c @@ -33,19 +33,16 @@ static void i40e_fdir(struct i40e_ring *tx_ring, i++; tx_ring->next_to_use = (i < tx_ring->count) ? i : 0; - flex_ptype = I40E_TXD_FLTR_QW0_QINDEX_MASK & - (fdata->q_index << I40E_TXD_FLTR_QW0_QINDEX_SHIFT); + flex_ptype = FIELD_PREP(I40E_TXD_FLTR_QW0_QINDEX_MASK, fdata->q_index); - flex_ptype |= I40E_TXD_FLTR_QW0_FLEXOFF_MASK & - (fdata->flex_off << I40E_TXD_FLTR_QW0_FLEXOFF_SHIFT); + flex_ptype |= FIELD_PREP(I40E_TXD_FLTR_QW0_FLEXOFF_MASK, + fdata->flex_off); - flex_ptype |= I40E_TXD_FLTR_QW0_PCTYPE_MASK & - (fdata->pctype << I40E_TXD_FLTR_QW0_PCTYPE_SHIFT); + flex_ptype |= FIELD_PREP(I40E_TXD_FLTR_QW0_PCTYPE_MASK, fdata->pctype); /* Use LAN VSI Id if not programmed by user */ - flex_ptype |= I40E_TXD_FLTR_QW0_DEST_VSI_MASK & - ((u32)(fdata->dest_vsi ? : pf->vsi[pf->lan_vsi]->id) << - I40E_TXD_FLTR_QW0_DEST_VSI_SHIFT); + flex_ptype |= FIELD_PREP(I40E_TXD_FLTR_QW0_DEST_VSI_MASK, + fdata->dest_vsi ? : pf->vsi[pf->lan_vsi]->id); dtype_cmd = I40E_TX_DESC_DTYPE_FILTER_PROG; @@ -55,17 +52,15 @@ static void i40e_fdir(struct i40e_ring *tx_ring, I40E_FILTER_PROGRAM_DESC_PCMD_REMOVE << I40E_TXD_FLTR_QW1_PCMD_SHIFT; - dtype_cmd |= I40E_TXD_FLTR_QW1_DEST_MASK & - (fdata->dest_ctl << I40E_TXD_FLTR_QW1_DEST_SHIFT); + dtype_cmd |= FIELD_PREP(I40E_TXD_FLTR_QW1_DEST_MASK, fdata->dest_ctl); - dtype_cmd |= I40E_TXD_FLTR_QW1_FD_STATUS_MASK & - (fdata->fd_status << I40E_TXD_FLTR_QW1_FD_STATUS_SHIFT); + dtype_cmd |= FIELD_PREP(I40E_TXD_FLTR_QW1_FD_STATUS_MASK, + fdata->fd_status); if (fdata->cnt_index) { dtype_cmd |= I40E_TXD_FLTR_QW1_CNT_ENA_MASK; - dtype_cmd |= I40E_TXD_FLTR_QW1_CNTINDEX_MASK & - ((u32)fdata->cnt_index << - I40E_TXD_FLTR_QW1_CNTINDEX_SHIFT); + dtype_cmd |= FIELD_PREP(I40E_TXD_FLTR_QW1_CNTINDEX_MASK, + fdata->cnt_index); } fdir_desc->qindex_flex_ptype_vsi = cpu_to_le32(flex_ptype); @@ -2959,8 +2954,8 @@ static void i40e_atr(struct i40e_ring *tx_ring, struct sk_buff *skb, i++; tx_ring->next_to_use = (i < tx_ring->count) ? i : 0; - flex_ptype = (tx_ring->queue_index << I40E_TXD_FLTR_QW0_QINDEX_SHIFT) & - I40E_TXD_FLTR_QW0_QINDEX_MASK; + flex_ptype = FIELD_PREP(I40E_TXD_FLTR_QW0_QINDEX_MASK, + tx_ring->queue_index); flex_ptype |= (tx_flags & I40E_TX_FLAGS_IPV4) ? 
(I40E_FILTER_PCTYPE_NONF_IPV4_TCP << I40E_TXD_FLTR_QW0_PCTYPE_SHIFT) : @@ -2986,14 +2981,12 @@ static void i40e_atr(struct i40e_ring *tx_ring, struct sk_buff *skb, dtype_cmd |= I40E_TXD_FLTR_QW1_CNT_ENA_MASK; if (!(tx_flags & I40E_TX_FLAGS_UDP_TUNNEL)) dtype_cmd |= - ((u32)I40E_FD_ATR_STAT_IDX(pf->hw.pf_id) << - I40E_TXD_FLTR_QW1_CNTINDEX_SHIFT) & - I40E_TXD_FLTR_QW1_CNTINDEX_MASK; + FIELD_PREP(I40E_TXD_FLTR_QW1_CNTINDEX_MASK, + I40E_FD_ATR_STAT_IDX(pf->hw.pf_id)); else dtype_cmd |= - ((u32)I40E_FD_ATR_TUNNEL_STAT_IDX(pf->hw.pf_id) << - I40E_TXD_FLTR_QW1_CNTINDEX_SHIFT) & - I40E_TXD_FLTR_QW1_CNTINDEX_MASK; + FIELD_PREP(I40E_TXD_FLTR_QW1_CNTINDEX_MASK, + I40E_FD_ATR_TUNNEL_STAT_IDX(pf->hw.pf_id)); if (test_bit(I40E_FLAG_HW_ATR_EVICT_ENA, pf->flags)) dtype_cmd |= I40E_TXD_FLTR_QW1_ATR_MASK; diff --git a/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c b/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c index 37cca484abb8..5a45c53e6770 100644 --- a/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c +++ b/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c @@ -659,11 +659,9 @@ static int i40e_config_vsi_tx_queue(struct i40e_vf *vf, u16 vsi_id, /* associate this queue with the PCI VF function */ qtx_ctl = I40E_QTX_CTL_VF_QUEUE; - qtx_ctl |= ((hw->pf_id << I40E_QTX_CTL_PF_INDX_SHIFT) - & I40E_QTX_CTL_PF_INDX_MASK); - qtx_ctl |= (((vf->vf_id + hw->func_caps.vf_base_id) - << I40E_QTX_CTL_VFVM_INDX_SHIFT) - & I40E_QTX_CTL_VFVM_INDX_MASK); + qtx_ctl |= FIELD_PREP(I40E_QTX_CTL_PF_INDX_MASK, hw->pf_id); + qtx_ctl |= FIELD_PREP(I40E_QTX_CTL_VFVM_INDX_MASK, + vf->vf_id + hw->func_caps.vf_base_id); wr32(hw, I40E_QTX_CTL(pf_queue_id), qtx_ctl); i40e_flush(hw); From patchwork Wed Dec 6 01:01:04 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jesse Brandeburg X-Patchwork-Id: 13480913 X-Patchwork-Delegate: kuba@kernel.org Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b="UgRiYksr" Received: from mgamail.intel.com (mgamail.intel.com [198.175.65.9]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 284031B8 for ; Tue, 5 Dec 2023 17:01:36 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1701824496; x=1733360496; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=EXpQwc8v10if+MqhxX2w/xCKwjG7qVilqA6UyrO1aNQ=; b=UgRiYksri35Dp9BO2+P/dZ4NoUmLrr9Swgl3jM0hJ79T+/iHidEe/4gR tRL4WAZJyfOqSTxCO9KIW+QEaIu0/L6gRW9x3lQiterJQ1+FFTnpxX/ID B3mStzPynbvdCHanaZ8jDhWH3DV4bhLRpcdXPzjnR0efNpv6AHpX2fuem e23I19ZGI4VtLg/6Y4wgoQsCU+mlTFwhYKZuENlEEyka84xoG/ZUik2xA h4EPBV/JyWxi7e+S9JWosjoiMkAedjavpSr3DQ/14wYFnMEBq7NBysAnp +JWgiIogRj5Lm5DU+9P/ZbJRsOYHm0aVez0G4SXvsMowM/ABsFdQe7aTW A==; X-IronPort-AV: E=McAfee;i="6600,9927,10915"; a="12700288" X-IronPort-AV: E=Sophos;i="6.04,254,1695711600"; d="scan'208";a="12700288" Received: from fmsmga004.fm.intel.com ([10.253.24.48]) by orvoesa101.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 05 Dec 2023 17:01:34 -0800 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10915"; a="841655248" X-IronPort-AV: E=Sophos;i="6.04,254,1695711600"; d="scan'208";a="841655248" Received: from jbrandeb-spr1.jf.intel.com ([10.166.28.233]) by fmsmga004-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 05 Dec 2023 17:01:32 -0800 From: Jesse Brandeburg To: intel-wired-lan@lists.osuosl.org Cc: Jesse Brandeburg , netdev@vger.kernel.org, 
aleksander.lobakin@intel.com, przemyslaw.kitszel@intel.com, horms@kernel.org, marcin.szycik@linux.intel.com, Julia Lawall , Ahmed Zaki Subject: [PATCH iwl-next v2 05/15] iavf: field prep conversion Date: Tue, 5 Dec 2023 17:01:04 -0800 Message-Id: <20231206010114.2259388-6-jesse.brandeburg@intel.com> X-Mailer: git-send-email 2.39.3 In-Reply-To: <20231206010114.2259388-1-jesse.brandeburg@intel.com> References: <20231206010114.2259388-1-jesse.brandeburg@intel.com> Precedence: bulk X-Mailing-List: netdev@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Patchwork-Delegate: kuba@kernel.org Refactor iavf driver to use FIELD_PREP(), which reduces lines of code and adds clarity of intent. This code was generated by the following coccinelle/spatch script and then manually repaired. Clean up a couple spots in the code that had repetitive y = cpu_to_*((blah << blah_blah) & blat) y |= cpu_to_*((blahs << blahs_blahs) & blats) to x = FIELD_PREP(blat blah) x |= FIELD_PREP(blats, blahs) y = cpu_to_*(x); @prep2@ constant shift,mask; type T; expression a; @@ -(((T)(a) << shift) & mask) +FIELD_PREP(mask, a) @prep@ constant shift,mask; type T; expression a; @@ -((T)((a) << shift) & mask) +FIELD_PREP(mask, a) Cc: Julia Lawall Cc: Ahmed Zaki Reviewed-by: Marcin Szycik Reviewed-by: Simon Horman Signed-off-by: Jesse Brandeburg Tested-by: Rafal Romanowski --- v2: updated commit message --- drivers/net/ethernet/intel/iavf/iavf_common.c | 31 ++++++++----------- drivers/net/ethernet/intel/iavf/iavf_fdir.c | 2 +- 2 files changed, 14 insertions(+), 19 deletions(-) diff --git a/drivers/net/ethernet/intel/iavf/iavf_common.c b/drivers/net/ethernet/intel/iavf/iavf_common.c index af5cc69f26e3..5a25233a89d5 100644 --- a/drivers/net/ethernet/intel/iavf/iavf_common.c +++ b/drivers/net/ethernet/intel/iavf/iavf_common.c @@ -331,6 +331,7 @@ static enum iavf_status iavf_aq_get_set_rss_lut(struct iavf_hw *hw, struct iavf_aq_desc desc; struct iavf_aqc_get_set_rss_lut *cmd_resp = (struct iavf_aqc_get_set_rss_lut *)&desc.params.raw; + u16 flags; if (set) iavf_fill_default_direct_cmd_desc(&desc, @@ -343,22 +344,18 @@ static enum iavf_status iavf_aq_get_set_rss_lut(struct iavf_hw *hw, desc.flags |= cpu_to_le16((u16)IAVF_AQ_FLAG_BUF); desc.flags |= cpu_to_le16((u16)IAVF_AQ_FLAG_RD); - cmd_resp->vsi_id = - cpu_to_le16((u16)((vsi_id << - IAVF_AQC_SET_RSS_LUT_VSI_ID_SHIFT) & - IAVF_AQC_SET_RSS_LUT_VSI_ID_MASK)); - cmd_resp->vsi_id |= cpu_to_le16((u16)IAVF_AQC_SET_RSS_LUT_VSI_VALID); + vsi_id = FIELD_PREP(IAVF_AQC_SET_RSS_LUT_VSI_ID_MASK, vsi_id) | + FIELD_PREP(IAVF_AQC_SET_RSS_LUT_VSI_VALID, 1); + cmd_resp->vsi_id = cpu_to_le16(vsi_id); if (pf_lut) - cmd_resp->flags |= cpu_to_le16((u16) - ((IAVF_AQC_SET_RSS_LUT_TABLE_TYPE_PF << - IAVF_AQC_SET_RSS_LUT_TABLE_TYPE_SHIFT) & - IAVF_AQC_SET_RSS_LUT_TABLE_TYPE_MASK)); + flags = FIELD_PREP(IAVF_AQC_SET_RSS_LUT_TABLE_TYPE_MASK, + IAVF_AQC_SET_RSS_LUT_TABLE_TYPE_PF); else - cmd_resp->flags |= cpu_to_le16((u16) - ((IAVF_AQC_SET_RSS_LUT_TABLE_TYPE_VSI << - IAVF_AQC_SET_RSS_LUT_TABLE_TYPE_SHIFT) & - IAVF_AQC_SET_RSS_LUT_TABLE_TYPE_MASK)); + flags = FIELD_PREP(IAVF_AQC_SET_RSS_LUT_TABLE_TYPE_MASK, + IAVF_AQC_SET_RSS_LUT_TABLE_TYPE_VSI); + + cmd_resp->flags = cpu_to_le16(flags); status = iavf_asq_send_command(hw, &desc, lut, lut_size, NULL); @@ -412,11 +409,9 @@ iavf_status iavf_aq_get_set_rss_key(struct iavf_hw *hw, u16 vsi_id, desc.flags |= cpu_to_le16((u16)IAVF_AQ_FLAG_BUF); desc.flags |= cpu_to_le16((u16)IAVF_AQ_FLAG_RD); - cmd_resp->vsi_id = - cpu_to_le16((u16)((vsi_id << 
- IAVF_AQC_SET_RSS_KEY_VSI_ID_SHIFT) & - IAVF_AQC_SET_RSS_KEY_VSI_ID_MASK)); - cmd_resp->vsi_id |= cpu_to_le16((u16)IAVF_AQC_SET_RSS_KEY_VSI_VALID); + vsi_id = FIELD_PREP(IAVF_AQC_SET_RSS_KEY_VSI_ID_MASK, vsi_id) | + FIELD_PREP(IAVF_AQC_SET_RSS_KEY_VSI_VALID, 1); + cmd_resp->vsi_id = cpu_to_le16(vsi_id); status = iavf_asq_send_command(hw, &desc, key, key_size, NULL); diff --git a/drivers/net/ethernet/intel/iavf/iavf_fdir.c b/drivers/net/ethernet/intel/iavf/iavf_fdir.c index 65ddcd81c993..2d47b0b4640e 100644 --- a/drivers/net/ethernet/intel/iavf/iavf_fdir.c +++ b/drivers/net/ethernet/intel/iavf/iavf_fdir.c @@ -358,7 +358,7 @@ iavf_fill_fdir_ip6_hdr(struct iavf_fdir_fltr *fltr, if (fltr->ip_mask.tclass == U8_MAX) { iph->priority = (fltr->ip_data.tclass >> 4) & 0xF; - iph->flow_lbl[0] = (fltr->ip_data.tclass << 4) & 0xF0; + iph->flow_lbl[0] = FIELD_PREP(0xF0, fltr->ip_data.tclass); VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT(hdr, IPV6, TC); } From patchwork Wed Dec 6 01:01:05 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jesse Brandeburg X-Patchwork-Id: 13480918 X-Patchwork-Delegate: kuba@kernel.org Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b="S/NV6/9D" Received: from mgamail.intel.com (mgamail.intel.com [198.175.65.9]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 284591B9 for ; Tue, 5 Dec 2023 17:01:36 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1701824496; x=1733360496; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=TFG5aBQwkeIG7uAqAAvlruKHDJecPwgL4dn3O7n3fO8=; b=S/NV6/9DAMAPb8/tT8sgbZLp7XC4tP0zjFiRaSfg2o6zjDVY3QChcArl 3orUkxukAByrnzVRlBJ6vEOXrwwSzjc/+mBulUN/l8XtnvdBGwBed1mrF CJrKTA+K/6w+SfQZWXddLlP2Tpc566ikGaAzqtcbBO6NSaUow86YoFhu7 cx5MTUyIHZjOoexmtpsztDp+xj8NIIk+l1bjVDLgZEaFcUwVdgKeS8RbE USWV9JnZ6KK93hFdGrGkEhdbTSPl7ihBJf5LkRLy68Tl8XqFDGlQ3wi6F OFOHREQo1vtiplpfujqwV9DtjhMp2yRsqWyHVcqtQFfTA2oyOd31a9FxI A==; X-IronPort-AV: E=McAfee;i="6600,9927,10915"; a="12700294" X-IronPort-AV: E=Sophos;i="6.04,254,1695711600"; d="scan'208";a="12700294" Received: from fmsmga004.fm.intel.com ([10.253.24.48]) by orvoesa101.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 05 Dec 2023 17:01:34 -0800 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10915"; a="841655253" X-IronPort-AV: E=Sophos;i="6.04,254,1695711600"; d="scan'208";a="841655253" Received: from jbrandeb-spr1.jf.intel.com ([10.166.28.233]) by fmsmga004-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 05 Dec 2023 17:01:33 -0800 From: Jesse Brandeburg To: intel-wired-lan@lists.osuosl.org Cc: Jesse Brandeburg , netdev@vger.kernel.org, aleksander.lobakin@intel.com, przemyslaw.kitszel@intel.com, horms@kernel.org, marcin.szycik@linux.intel.com, Julia Lawall Subject: [PATCH iwl-next v2 06/15] ice: field prep conversion Date: Tue, 5 Dec 2023 17:01:05 -0800 Message-Id: <20231206010114.2259388-7-jesse.brandeburg@intel.com> X-Mailer: git-send-email 2.39.3 In-Reply-To: <20231206010114.2259388-1-jesse.brandeburg@intel.com> References: <20231206010114.2259388-1-jesse.brandeburg@intel.com> Precedence: bulk X-Mailing-List: netdev@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Patchwork-Delegate: kuba@kernel.org Refactor ice driver to use FIELD_PREP(), which reduces lines of code and adds clarity of intent. 
This code was generated by the following coccinelle/spatch script and then manually repaired. Several places I changed to OR into a single variable with |= instead of using a multi-line statement with trailing OR operators, as it (subjectively) makes the code clearer. A local variable vmvf_and_timeout was created and used to avoid multiple logical ORs being __le16 converted, which shortened some lines and makes the code cleaner. Also clean up a couple of places where conversions were made to have the code read more clearly/consistently. @prep2@ constant shift,mask; type T; expression a; @@ -(((T)(a) << shift) & mask) +FIELD_PREP(mask, a) @prep@ constant shift,mask; type T; expression a; @@ -((T)((a) << shift) & mask) +FIELD_PREP(mask, a) Cc: Julia Lawall CC: Alexander Lobakin Reviewed-by: Marcin Szycik Reviewed-by: Simon Horman Signed-off-by: Jesse Brandeburg Tested-by: Pucha Himasekhar Reddy (A Contingent worker at Intel) --- v2: added a couple more preps, some code cleanups found when looking for le32_set/encode opportunities. --- drivers/net/ethernet/intel/ice/ice_base.c | 20 ++--- drivers/net/ethernet/intel/ice/ice_common.c | 35 ++++----- drivers/net/ethernet/intel/ice/ice_dcb.c | 3 +- drivers/net/ethernet/intel/ice/ice_dcb_lib.c | 2 +- drivers/net/ethernet/intel/ice/ice_eswitch.c | 4 +- drivers/net/ethernet/intel/ice/ice_fdir.c | 69 ++++++----------- .../net/ethernet/intel/ice/ice_flex_pipe.c | 8 +- drivers/net/ethernet/intel/ice/ice_flow.c | 2 +- drivers/net/ethernet/intel/ice/ice_lag.c | 7 +- drivers/net/ethernet/intel/ice/ice_lib.c | 52 +++++-------- drivers/net/ethernet/intel/ice/ice_ptp.c | 9 +-- drivers/net/ethernet/intel/ice/ice_sriov.c | 38 ++++------ drivers/net/ethernet/intel/ice/ice_switch.c | 75 +++++++++---------- drivers/net/ethernet/intel/ice/ice_txrx.c | 6 +- drivers/net/ethernet/intel/ice/ice_virtchnl.c | 8 +- .../ethernet/intel/ice/ice_virtchnl_fdir.c | 2 +- .../net/ethernet/intel/ice/ice_vsi_vlan_lib.c | 25 +++---- 17 files changed, 147 insertions(+), 218 deletions(-) diff --git a/drivers/net/ethernet/intel/ice/ice_base.c b/drivers/net/ethernet/intel/ice/ice_base.c index 7fa43827a3f0..3fd6e99dba23 100644 --- a/drivers/net/ethernet/intel/ice/ice_base.c +++ b/drivers/net/ethernet/intel/ice/ice_base.c @@ -234,14 +234,10 @@ static void ice_cfg_itr_gran(struct ice_hw *hw) GLINT_CTL_ITR_GRAN_25_S) == ICE_ITR_GRAN_US)) return; - regval = ((ICE_ITR_GRAN_US << GLINT_CTL_ITR_GRAN_200_S) & - GLINT_CTL_ITR_GRAN_200_M) | - ((ICE_ITR_GRAN_US << GLINT_CTL_ITR_GRAN_100_S) & - GLINT_CTL_ITR_GRAN_100_M) | - ((ICE_ITR_GRAN_US << GLINT_CTL_ITR_GRAN_50_S) & - GLINT_CTL_ITR_GRAN_50_M) | - ((ICE_ITR_GRAN_US << GLINT_CTL_ITR_GRAN_25_S) & - GLINT_CTL_ITR_GRAN_25_M); + regval = FIELD_PREP(GLINT_CTL_ITR_GRAN_200_M, ICE_ITR_GRAN_US) | + FIELD_PREP(GLINT_CTL_ITR_GRAN_100_M, ICE_ITR_GRAN_US) | + FIELD_PREP(GLINT_CTL_ITR_GRAN_50_M, ICE_ITR_GRAN_US) | + FIELD_PREP(GLINT_CTL_ITR_GRAN_25_M, ICE_ITR_GRAN_US); wr32(hw, GLINT_CTL, regval); } @@ -913,10 +909,10 @@ ice_cfg_txq_interrupt(struct ice_vsi *vsi, u16 txq, u16 msix_idx, u16 itr_idx) struct ice_hw *hw = &pf->hw; u32 val; - itr_idx = (itr_idx << QINT_TQCTL_ITR_INDX_S) & QINT_TQCTL_ITR_INDX_M; + itr_idx = FIELD_PREP(QINT_TQCTL_ITR_INDX_M, itr_idx); val = QINT_TQCTL_CAUSE_ENA_M | itr_idx | - ((msix_idx << QINT_TQCTL_MSIX_INDX_S) & QINT_TQCTL_MSIX_INDX_M); + FIELD_PREP(QINT_TQCTL_MSIX_INDX_M, msix_idx); wr32(hw, QINT_TQCTL(vsi->txq_map[txq]), val); if (ice_is_xdp_ena_vsi(vsi)) { @@ -945,10 +941,10 @@ ice_cfg_rxq_interrupt(struct ice_vsi *vsi, u16 
rxq, u16 msix_idx, u16 itr_idx) struct ice_hw *hw = &pf->hw; u32 val; - itr_idx = (itr_idx << QINT_RQCTL_ITR_INDX_S) & QINT_RQCTL_ITR_INDX_M; + itr_idx = FIELD_PREP(QINT_RQCTL_ITR_INDX_M, itr_idx); val = QINT_RQCTL_CAUSE_ENA_M | itr_idx | - ((msix_idx << QINT_RQCTL_MSIX_INDX_S) & QINT_RQCTL_MSIX_INDX_M); + FIELD_PREP(QINT_RQCTL_MSIX_INDX_M, msix_idx); wr32(hw, QINT_RQCTL(vsi->rxq_map[rxq]), val); diff --git a/drivers/net/ethernet/intel/ice/ice_common.c b/drivers/net/ethernet/intel/ice/ice_common.c index 8d97434e1413..eb5c00b83112 100644 --- a/drivers/net/ethernet/intel/ice/ice_common.c +++ b/drivers/net/ethernet/intel/ice/ice_common.c @@ -4095,6 +4095,7 @@ ice_aq_sff_eeprom(struct ice_hw *hw, u16 lport, u8 bus_addr, { struct ice_aqc_sff_eeprom *cmd; struct ice_aq_desc desc; + u16 i2c_bus_addr; int status; if (!data || (mem_addr & 0xff00)) @@ -4105,15 +4106,13 @@ ice_aq_sff_eeprom(struct ice_hw *hw, u16 lport, u8 bus_addr, desc.flags = cpu_to_le16(ICE_AQ_FLAG_RD); cmd->lport_num = (u8)(lport & 0xff); cmd->lport_num_valid = (u8)((lport >> 8) & 0x01); - cmd->i2c_bus_addr = cpu_to_le16(((bus_addr >> 1) & - ICE_AQC_SFF_I2CBUS_7BIT_M) | - ((set_page << - ICE_AQC_SFF_SET_EEPROM_PAGE_S) & - ICE_AQC_SFF_SET_EEPROM_PAGE_M)); - cmd->i2c_mem_addr = cpu_to_le16(mem_addr & 0xff); - cmd->eeprom_page = cpu_to_le16((u16)page << ICE_AQC_SFF_EEPROM_PAGE_S); + i2c_bus_addr = FIELD_PREP(ICE_AQC_SFF_I2CBUS_7BIT_M, bus_addr >> 1) | + FIELD_PREP(ICE_AQC_SFF_SET_EEPROM_PAGE_M, set_page); if (write) - cmd->i2c_bus_addr |= cpu_to_le16(ICE_AQC_SFF_IS_WRITE); + i2c_bus_addr |= ICE_AQC_SFF_IS_WRITE; + cmd->i2c_bus_addr = cpu_to_le16(i2c_bus_addr); + cmd->i2c_mem_addr = cpu_to_le16(mem_addr & 0xff); + cmd->eeprom_page = le16_encode_bits(page, ICE_AQC_SFF_EEPROM_PAGE_M); status = ice_aq_send_cmd(hw, &desc, data, length, cd); return status; @@ -4368,6 +4367,7 @@ ice_aq_dis_lan_txq(struct ice_hw *hw, u8 num_qgrps, struct ice_aqc_dis_txq_item *item; struct ice_aqc_dis_txqs *cmd; struct ice_aq_desc desc; + u16 vmvf_and_timeout; u16 i, sz = 0; int status; @@ -4383,27 +4383,26 @@ ice_aq_dis_lan_txq(struct ice_hw *hw, u8 num_qgrps, cmd->num_entries = num_qgrps; - cmd->vmvf_and_timeout = cpu_to_le16((5 << ICE_AQC_Q_DIS_TIMEOUT_S) & - ICE_AQC_Q_DIS_TIMEOUT_M); + vmvf_and_timeout = FIELD_PREP(ICE_AQC_Q_DIS_TIMEOUT_M, 5); switch (rst_src) { case ICE_VM_RESET: cmd->cmd_type = ICE_AQC_Q_DIS_CMD_VM_RESET; - cmd->vmvf_and_timeout |= - cpu_to_le16(vmvf_num & ICE_AQC_Q_DIS_VMVF_NUM_M); + vmvf_and_timeout |= vmvf_num & ICE_AQC_Q_DIS_VMVF_NUM_M; break; case ICE_VF_RESET: cmd->cmd_type = ICE_AQC_Q_DIS_CMD_VF_RESET; /* In this case, FW expects vmvf_num to be absolute VF ID */ - cmd->vmvf_and_timeout |= - cpu_to_le16((vmvf_num + hw->func_caps.vf_base_id) & - ICE_AQC_Q_DIS_VMVF_NUM_M); + vmvf_and_timeout |= (vmvf_num + hw->func_caps.vf_base_id) & + ICE_AQC_Q_DIS_VMVF_NUM_M; break; case ICE_NO_RESET: default: break; } + cmd->vmvf_and_timeout = cpu_to_le16(vmvf_and_timeout); + /* flush pipe on time out */ cmd->cmd_type |= ICE_AQC_Q_DIS_CMD_FLUSH_PIPE; /* If no queue group info, we are in a reset flow. 
Issue the AQ */ @@ -4478,10 +4477,8 @@ ice_aq_cfg_lan_txq(struct ice_hw *hw, struct ice_aqc_cfg_txqs_buf *buf, cmd->cmd_type = ICE_AQC_Q_CFG_TC_CHNG; cmd->num_qs = num_qs; cmd->port_num_chng = (oldport & ICE_AQC_Q_CFG_SRC_PRT_M); - cmd->port_num_chng |= (newport << ICE_AQC_Q_CFG_DST_PRT_S) & - ICE_AQC_Q_CFG_DST_PRT_M; - cmd->time_out = (5 << ICE_AQC_Q_CFG_TIMEOUT_S) & - ICE_AQC_Q_CFG_TIMEOUT_M; + cmd->port_num_chng |= FIELD_PREP(ICE_AQC_Q_CFG_DST_PRT_M, newport); + cmd->time_out = FIELD_PREP(ICE_AQC_Q_CFG_TIMEOUT_M, 5); cmd->blocked_cgds = 0; status = ice_aq_send_cmd(hw, &desc, buf, buf_size, cd); diff --git a/drivers/net/ethernet/intel/ice/ice_dcb.c b/drivers/net/ethernet/intel/ice/ice_dcb.c index 396e555023aa..41b7853291d3 100644 --- a/drivers/net/ethernet/intel/ice/ice_dcb.c +++ b/drivers/net/ethernet/intel/ice/ice_dcb.c @@ -35,8 +35,7 @@ ice_aq_get_lldp_mib(struct ice_hw *hw, u8 bridge_type, u8 mib_type, void *buf, ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_lldp_get_mib); cmd->type = mib_type & ICE_AQ_LLDP_MIB_TYPE_M; - cmd->type |= (bridge_type << ICE_AQ_LLDP_BRID_TYPE_S) & - ICE_AQ_LLDP_BRID_TYPE_M; + cmd->type |= FIELD_PREP(ICE_AQ_LLDP_BRID_TYPE_M, bridge_type); desc.datalen = cpu_to_le16(buf_size); diff --git a/drivers/net/ethernet/intel/ice/ice_dcb_lib.c b/drivers/net/ethernet/intel/ice/ice_dcb_lib.c index 850db8e0e6b0..6e20ee610022 100644 --- a/drivers/net/ethernet/intel/ice/ice_dcb_lib.c +++ b/drivers/net/ethernet/intel/ice/ice_dcb_lib.c @@ -934,7 +934,7 @@ ice_tx_prepare_vlan_flags_dcb(struct ice_tx_ring *tx_ring, skb->priority != TC_PRIO_CONTROL) { first->vid &= ~VLAN_PRIO_MASK; /* Mask the lower 3 bits to set the 802.1p priority */ - first->vid |= (skb->priority << VLAN_PRIO_SHIFT) & VLAN_PRIO_MASK; + first->vid |= FIELD_PREP(VLAN_PRIO_MASK, skb->priority); /* if this is not already set it means a VLAN 0 + priority needs * to be offloaded */ diff --git a/drivers/net/ethernet/intel/ice/ice_eswitch.c b/drivers/net/ethernet/intel/ice/ice_eswitch.c index 3f80e2081e5d..0aed9b5dba06 100644 --- a/drivers/net/ethernet/intel/ice/ice_eswitch.c +++ b/drivers/net/ethernet/intel/ice/ice_eswitch.c @@ -358,8 +358,8 @@ ice_eswitch_set_target_vsi(struct sk_buff *skb, off->cd_qw1 |= (cd_cmd | ICE_TX_DESC_DTYPE_CTX); } else { cd_cmd = ICE_TX_CTX_DESC_SWTCH_VSI << ICE_TXD_CTX_QW1_CMD_S; - dst_vsi = ((u64)dst->u.port_info.port_id << - ICE_TXD_CTX_QW1_VSI_S) & ICE_TXD_CTX_QW1_VSI_M; + dst_vsi = FIELD_PREP(ICE_TXD_CTX_QW1_VSI_M, + dst->u.port_info.port_id); off->cd_qw1 = cd_cmd | dst_vsi | ICE_TX_DESC_DTYPE_CTX; } } diff --git a/drivers/net/ethernet/intel/ice/ice_fdir.c b/drivers/net/ethernet/intel/ice/ice_fdir.c index ae089d32ee9d..5840c3e04a5b 100644 --- a/drivers/net/ethernet/intel/ice/ice_fdir.c +++ b/drivers/net/ethernet/intel/ice/ice_fdir.c @@ -604,55 +604,32 @@ ice_set_fd_desc_val(struct ice_fd_fltr_desc_ctx *ctx, u64 qword; /* prep QW0 of FD filter programming desc */ - qword = ((u64)ctx->qindex << ICE_FXD_FLTR_QW0_QINDEX_S) & - ICE_FXD_FLTR_QW0_QINDEX_M; - qword |= ((u64)ctx->comp_q << ICE_FXD_FLTR_QW0_COMP_Q_S) & - ICE_FXD_FLTR_QW0_COMP_Q_M; - qword |= ((u64)ctx->comp_report << ICE_FXD_FLTR_QW0_COMP_REPORT_S) & - ICE_FXD_FLTR_QW0_COMP_REPORT_M; - qword |= ((u64)ctx->fd_space << ICE_FXD_FLTR_QW0_FD_SPACE_S) & - ICE_FXD_FLTR_QW0_FD_SPACE_M; - qword |= ((u64)ctx->cnt_index << ICE_FXD_FLTR_QW0_STAT_CNT_S) & - ICE_FXD_FLTR_QW0_STAT_CNT_M; - qword |= ((u64)ctx->cnt_ena << ICE_FXD_FLTR_QW0_STAT_ENA_S) & - ICE_FXD_FLTR_QW0_STAT_ENA_M; - qword |= ((u64)ctx->evict_ena << 
ICE_FXD_FLTR_QW0_EVICT_ENA_S) & - ICE_FXD_FLTR_QW0_EVICT_ENA_M; - qword |= ((u64)ctx->toq << ICE_FXD_FLTR_QW0_TO_Q_S) & - ICE_FXD_FLTR_QW0_TO_Q_M; - qword |= ((u64)ctx->toq_prio << ICE_FXD_FLTR_QW0_TO_Q_PRI_S) & - ICE_FXD_FLTR_QW0_TO_Q_PRI_M; - qword |= ((u64)ctx->dpu_recipe << ICE_FXD_FLTR_QW0_DPU_RECIPE_S) & - ICE_FXD_FLTR_QW0_DPU_RECIPE_M; - qword |= ((u64)ctx->drop << ICE_FXD_FLTR_QW0_DROP_S) & - ICE_FXD_FLTR_QW0_DROP_M; - qword |= ((u64)ctx->flex_prio << ICE_FXD_FLTR_QW0_FLEX_PRI_S) & - ICE_FXD_FLTR_QW0_FLEX_PRI_M; - qword |= ((u64)ctx->flex_mdid << ICE_FXD_FLTR_QW0_FLEX_MDID_S) & - ICE_FXD_FLTR_QW0_FLEX_MDID_M; - qword |= ((u64)ctx->flex_val << ICE_FXD_FLTR_QW0_FLEX_VAL_S) & - ICE_FXD_FLTR_QW0_FLEX_VAL_M; + qword = FIELD_PREP(ICE_FXD_FLTR_QW0_QINDEX_M, ctx->qindex); + qword |= FIELD_PREP(ICE_FXD_FLTR_QW0_COMP_Q_M, ctx->comp_q); + qword |= FIELD_PREP(ICE_FXD_FLTR_QW0_COMP_REPORT_M, ctx->comp_report); + qword |= FIELD_PREP(ICE_FXD_FLTR_QW0_FD_SPACE_M, ctx->fd_space); + qword |= FIELD_PREP(ICE_FXD_FLTR_QW0_STAT_CNT_M, ctx->cnt_index); + qword |= FIELD_PREP(ICE_FXD_FLTR_QW0_STAT_ENA_M, ctx->cnt_ena); + qword |= FIELD_PREP(ICE_FXD_FLTR_QW0_EVICT_ENA_M, ctx->evict_ena); + qword |= FIELD_PREP(ICE_FXD_FLTR_QW0_TO_Q_M, ctx->toq); + qword |= FIELD_PREP(ICE_FXD_FLTR_QW0_TO_Q_PRI_M, ctx->toq_prio); + qword |= FIELD_PREP(ICE_FXD_FLTR_QW0_DPU_RECIPE_M, ctx->dpu_recipe); + qword |= FIELD_PREP(ICE_FXD_FLTR_QW0_DROP_M, ctx->drop); + qword |= FIELD_PREP(ICE_FXD_FLTR_QW0_FLEX_PRI_M, ctx->flex_prio); + qword |= FIELD_PREP(ICE_FXD_FLTR_QW0_FLEX_MDID_M, ctx->flex_mdid); + qword |= FIELD_PREP(ICE_FXD_FLTR_QW0_FLEX_VAL_M, ctx->flex_val); fdir_desc->qidx_compq_space_stat = cpu_to_le64(qword); /* prep QW1 of FD filter programming desc */ - qword = ((u64)ctx->dtype << ICE_FXD_FLTR_QW1_DTYPE_S) & - ICE_FXD_FLTR_QW1_DTYPE_M; - qword |= ((u64)ctx->pcmd << ICE_FXD_FLTR_QW1_PCMD_S) & - ICE_FXD_FLTR_QW1_PCMD_M; - qword |= ((u64)ctx->desc_prof_prio << ICE_FXD_FLTR_QW1_PROF_PRI_S) & - ICE_FXD_FLTR_QW1_PROF_PRI_M; - qword |= ((u64)ctx->desc_prof << ICE_FXD_FLTR_QW1_PROF_S) & - ICE_FXD_FLTR_QW1_PROF_M; - qword |= ((u64)ctx->fd_vsi << ICE_FXD_FLTR_QW1_FD_VSI_S) & - ICE_FXD_FLTR_QW1_FD_VSI_M; - qword |= ((u64)ctx->swap << ICE_FXD_FLTR_QW1_SWAP_S) & - ICE_FXD_FLTR_QW1_SWAP_M; - qword |= ((u64)ctx->fdid_prio << ICE_FXD_FLTR_QW1_FDID_PRI_S) & - ICE_FXD_FLTR_QW1_FDID_PRI_M; - qword |= ((u64)ctx->fdid_mdid << ICE_FXD_FLTR_QW1_FDID_MDID_S) & - ICE_FXD_FLTR_QW1_FDID_MDID_M; - qword |= ((u64)ctx->fdid << ICE_FXD_FLTR_QW1_FDID_S) & - ICE_FXD_FLTR_QW1_FDID_M; + qword = FIELD_PREP(ICE_FXD_FLTR_QW1_DTYPE_M, ctx->dtype); + qword |= FIELD_PREP(ICE_FXD_FLTR_QW1_PCMD_M, ctx->pcmd); + qword |= FIELD_PREP(ICE_FXD_FLTR_QW1_PROF_PRI_M, ctx->desc_prof_prio); + qword |= FIELD_PREP(ICE_FXD_FLTR_QW1_PROF_M, ctx->desc_prof); + qword |= FIELD_PREP(ICE_FXD_FLTR_QW1_FD_VSI_M, ctx->fd_vsi); + qword |= FIELD_PREP(ICE_FXD_FLTR_QW1_SWAP_M, ctx->swap); + qword |= FIELD_PREP(ICE_FXD_FLTR_QW1_FDID_PRI_M, ctx->fdid_prio); + qword |= FIELD_PREP(ICE_FXD_FLTR_QW1_FDID_MDID_M, ctx->fdid_mdid); + qword |= FIELD_PREP(ICE_FXD_FLTR_QW1_FDID_M, ctx->fdid); fdir_desc->dtype_cmd_vsi_fdid = cpu_to_le64(qword); } diff --git a/drivers/net/ethernet/intel/ice/ice_flex_pipe.c b/drivers/net/ethernet/intel/ice/ice_flex_pipe.c index 5ce413965930..b0ce58829584 100644 --- a/drivers/net/ethernet/intel/ice/ice_flex_pipe.c +++ b/drivers/net/ethernet/intel/ice/ice_flex_pipe.c @@ -1409,13 +1409,13 @@ ice_write_prof_mask_reg(struct ice_hw *hw, enum ice_block blk, u16 mask_idx, switch 
(blk) { case ICE_BLK_RSS: offset = GLQF_HMASK(mask_idx); - val = (idx << GLQF_HMASK_MSK_INDEX_S) & GLQF_HMASK_MSK_INDEX_M; - val |= (mask << GLQF_HMASK_MASK_S) & GLQF_HMASK_MASK_M; + val = FIELD_PREP(GLQF_HMASK_MSK_INDEX_M, idx); + val |= FIELD_PREP(GLQF_HMASK_MASK_M, mask); break; case ICE_BLK_FD: offset = GLQF_FDMASK(mask_idx); - val = (idx << GLQF_FDMASK_MSK_INDEX_S) & GLQF_FDMASK_MSK_INDEX_M; - val |= (mask << GLQF_FDMASK_MASK_S) & GLQF_FDMASK_MASK_M; + val = FIELD_PREP(GLQF_FDMASK_MSK_INDEX_M, idx); + val |= FIELD_PREP(GLQF_FDMASK_MASK_M, mask); break; default: ice_debug(hw, ICE_DBG_PKG, "No profile masks for block %d\n", diff --git a/drivers/net/ethernet/intel/ice/ice_flow.c b/drivers/net/ethernet/intel/ice/ice_flow.c index fb8b925aaf8b..f0c890d612ef 100644 --- a/drivers/net/ethernet/intel/ice/ice_flow.c +++ b/drivers/net/ethernet/intel/ice/ice_flow.c @@ -2035,7 +2035,7 @@ ice_add_rss_list(struct ice_hw *hw, u16 vsi_handle, struct ice_flow_prof *prof) */ #define ICE_FLOW_GEN_PROFID(hash, hdr, segs_cnt) \ ((u64)(((u64)(hash) & ICE_FLOW_PROF_HASH_M) | \ - (((u64)(hdr) << ICE_FLOW_PROF_HDR_S) & ICE_FLOW_PROF_HDR_M) | \ + FIELD_PREP(ICE_FLOW_PROF_HDR_M, hdr) | \ ((u8)((segs_cnt) - 1) ? ICE_FLOW_PROF_ENCAP_M : 0))) /** diff --git a/drivers/net/ethernet/intel/ice/ice_lag.c b/drivers/net/ethernet/intel/ice/ice_lag.c index cd065ec48c87..a7f3437e8e42 100644 --- a/drivers/net/ethernet/intel/ice/ice_lag.c +++ b/drivers/net/ethernet/intel/ice/ice_lag.c @@ -208,8 +208,7 @@ ice_lag_cfg_fltr(struct ice_lag *lag, u32 act, u16 recipe_id, u16 *rule_idx, eth_hdr = s_rule->hdr_data; ice_fill_eth_hdr(eth_hdr); - act |= (vsi_num << ICE_SINGLE_ACT_VSI_ID_S) & - ICE_SINGLE_ACT_VSI_ID_M; + act |= FIELD_PREP(ICE_SINGLE_ACT_VSI_ID_M, vsi_num); s_rule->hdr.type = cpu_to_le16(ICE_AQC_SW_RULES_T_LKUP_RX); s_rule->recipe_id = cpu_to_le16(recipe_id); @@ -711,9 +710,7 @@ ice_lag_cfg_cp_fltr(struct ice_lag *lag, bool add) s_rule->act = cpu_to_le32(ICE_FWD_TO_VSI | ICE_SINGLE_ACT_LAN_ENABLE | ICE_SINGLE_ACT_VALID_BIT | - ((vsi->vsi_num << - ICE_SINGLE_ACT_VSI_ID_S) & - ICE_SINGLE_ACT_VSI_ID_M)); + FIELD_PREP(ICE_SINGLE_ACT_VSI_ID_M, vsi->vsi_num)); s_rule->hdr_len = cpu_to_le16(ICE_LAG_SRIOV_TRAIN_PKT_LEN); memcpy(s_rule->hdr_data, lacp_train_pkt, LACP_TRAIN_PKT_LEN); opc = ice_aqc_opc_add_sw_rules; diff --git a/drivers/net/ethernet/intel/ice/ice_lib.c b/drivers/net/ethernet/intel/ice/ice_lib.c index d826b5afa143..394f915290f6 100644 --- a/drivers/net/ethernet/intel/ice/ice_lib.c +++ b/drivers/net/ethernet/intel/ice/ice_lib.c @@ -986,13 +986,11 @@ static void ice_set_dflt_vsi_ctx(struct ice_hw *hw, struct ice_vsi_ctx *ctxt) ctxt->info.inner_vlan_flags |= ICE_AQ_VSI_INNER_VLAN_EMODE_NOTHING; ctxt->info.outer_vlan_flags = - (ICE_AQ_VSI_OUTER_VLAN_TX_MODE_ALL << - ICE_AQ_VSI_OUTER_VLAN_TX_MODE_S) & - ICE_AQ_VSI_OUTER_VLAN_TX_MODE_M; + FIELD_PREP(ICE_AQ_VSI_OUTER_VLAN_TX_MODE_M, + ICE_AQ_VSI_OUTER_VLAN_TX_MODE_ALL); ctxt->info.outer_vlan_flags |= - (ICE_AQ_VSI_OUTER_TAG_VLAN_8100 << - ICE_AQ_VSI_OUTER_TAG_TYPE_S) & - ICE_AQ_VSI_OUTER_TAG_TYPE_M; + FIELD_PREP(ICE_AQ_VSI_OUTER_TAG_TYPE_M, + ICE_AQ_VSI_OUTER_TAG_VLAN_8100); ctxt->info.outer_vlan_flags |= FIELD_PREP(ICE_AQ_VSI_OUTER_VLAN_EMODE_M, ICE_AQ_VSI_OUTER_VLAN_EMODE_NOTHING); @@ -1071,10 +1069,8 @@ static int ice_vsi_setup_q_map(struct ice_vsi *vsi, struct ice_vsi_ctx *ctxt) vsi->tc_cfg.tc_info[i].qcount_tx = num_txq_per_tc; vsi->tc_cfg.tc_info[i].netdev_tc = netdev_tc++; - qmap = ((offset << ICE_AQ_VSI_TC_Q_OFFSET_S) & - ICE_AQ_VSI_TC_Q_OFFSET_M) | - ((pow << 
ICE_AQ_VSI_TC_Q_NUM_S) & - ICE_AQ_VSI_TC_Q_NUM_M); + qmap = FIELD_PREP(ICE_AQ_VSI_TC_Q_OFFSET_M, offset); + qmap |= FIELD_PREP(ICE_AQ_VSI_TC_Q_NUM_M, pow); offset += num_rxq_per_tc; tx_count += num_txq_per_tc; ctxt->info.tc_mapping[i] = cpu_to_le16(qmap); @@ -1157,18 +1153,14 @@ static void ice_set_fd_vsi_ctx(struct ice_vsi_ctx *ctxt, struct ice_vsi *vsi) ctxt->info.max_fd_fltr_shared = cpu_to_le16(vsi->num_bfltr); /* default queue index within the VSI of the default FD */ - val = ((dflt_q << ICE_AQ_VSI_FD_DEF_Q_S) & - ICE_AQ_VSI_FD_DEF_Q_M); + val = FIELD_PREP(ICE_AQ_VSI_FD_DEF_Q_M, dflt_q); /* target queue or queue group to the FD filter */ - val |= ((dflt_q_group << ICE_AQ_VSI_FD_DEF_GRP_S) & - ICE_AQ_VSI_FD_DEF_GRP_M); + val |= FIELD_PREP(ICE_AQ_VSI_FD_DEF_GRP_M, dflt_q_group); ctxt->info.fd_def_q = cpu_to_le16(val); /* queue index on which FD filter completion is reported */ - val = ((report_q << ICE_AQ_VSI_FD_REPORT_Q_S) & - ICE_AQ_VSI_FD_REPORT_Q_M); + val = FIELD_PREP(ICE_AQ_VSI_FD_REPORT_Q_M, report_q); /* priority of the default qindex action */ - val |= ((dflt_q_prio << ICE_AQ_VSI_FD_DEF_PRIORITY_S) & - ICE_AQ_VSI_FD_DEF_PRIORITY_M); + val |= FIELD_PREP(ICE_AQ_VSI_FD_DEF_PRIORITY_M, dflt_q_prio); ctxt->info.fd_report_opt = cpu_to_le16(val); } @@ -1204,9 +1196,9 @@ static void ice_set_rss_vsi_ctx(struct ice_vsi_ctx *ctxt, struct ice_vsi *vsi) return; } - ctxt->info.q_opt_rss = ((lut_type << ICE_AQ_VSI_Q_OPT_RSS_LUT_S) & - ICE_AQ_VSI_Q_OPT_RSS_LUT_M) | - (hash_type & ICE_AQ_VSI_Q_OPT_RSS_HASH_M); + ctxt->info.q_opt_rss = FIELD_PREP(ICE_AQ_VSI_Q_OPT_RSS_LUT_M, + lut_type); + ctxt->info.q_opt_rss |= (hash_type & ICE_AQ_VSI_Q_OPT_RSS_HASH_M); } static void @@ -1220,10 +1212,8 @@ ice_chnl_vsi_setup_q_map(struct ice_vsi *vsi, struct ice_vsi_ctx *ctxt) qcount = min_t(int, vsi->num_rxq, pf->num_lan_msix); pow = order_base_2(qcount); - qmap = ((offset << ICE_AQ_VSI_TC_Q_OFFSET_S) & - ICE_AQ_VSI_TC_Q_OFFSET_M) | - ((pow << ICE_AQ_VSI_TC_Q_NUM_S) & - ICE_AQ_VSI_TC_Q_NUM_M); + qmap = FIELD_PREP(ICE_AQ_VSI_TC_Q_OFFSET_M, offset); + qmap |= FIELD_PREP(ICE_AQ_VSI_TC_Q_NUM_M, pow); ctxt->info.tc_mapping[0] = cpu_to_le16(qmap); ctxt->info.mapping_flags |= cpu_to_le16(ICE_AQ_VSI_Q_MAP_CONTIG); @@ -1813,11 +1803,8 @@ ice_write_qrxflxp_cntxt(struct ice_hw *hw, u16 pf_q, u32 rxdid, u32 prio, QRXFLXP_CNTXT_RXDID_PRIO_M | QRXFLXP_CNTXT_TS_M); - regval |= (rxdid << QRXFLXP_CNTXT_RXDID_IDX_S) & - QRXFLXP_CNTXT_RXDID_IDX_M; - - regval |= (prio << QRXFLXP_CNTXT_RXDID_PRIO_S) & - QRXFLXP_CNTXT_RXDID_PRIO_M; + regval |= FIELD_PREP(QRXFLXP_CNTXT_RXDID_IDX_M, rxdid); + regval |= FIELD_PREP(QRXFLXP_CNTXT_RXDID_PRIO_M, prio); if (ena_ts) /* Enable TimeSync on this queue */ @@ -3341,9 +3328,8 @@ ice_vsi_setup_q_map_mqprio(struct ice_vsi *vsi, struct ice_vsi_ctx *ctxt, vsi->tc_cfg.ena_tc = ena_tc ? 
ena_tc : 1; pow = order_base_2(tc0_qcount); - qmap = ((tc0_offset << ICE_AQ_VSI_TC_Q_OFFSET_S) & - ICE_AQ_VSI_TC_Q_OFFSET_M) | - ((pow << ICE_AQ_VSI_TC_Q_NUM_S) & ICE_AQ_VSI_TC_Q_NUM_M); + qmap = FIELD_PREP(ICE_AQ_VSI_TC_Q_OFFSET_M, tc0_offset); + qmap |= FIELD_PREP(ICE_AQ_VSI_TC_Q_NUM_M, pow); ice_for_each_traffic_class(i) { if (!(vsi->tc_cfg.ena_tc & BIT(i))) { diff --git a/drivers/net/ethernet/intel/ice/ice_ptp.c b/drivers/net/ethernet/intel/ice/ice_ptp.c index 03fc9c7cd21a..42458554d2a7 100644 --- a/drivers/net/ethernet/intel/ice/ice_ptp.c +++ b/drivers/net/ethernet/intel/ice/ice_ptp.c @@ -1359,8 +1359,8 @@ static int ice_ptp_tx_ena_intr(struct ice_pf *pf, bool ena, u32 threshold) if (ena) { val |= Q_REG_TX_MEM_GBL_CFG_INTR_ENA_M; val &= ~Q_REG_TX_MEM_GBL_CFG_INTR_THR_M; - val |= ((threshold << Q_REG_TX_MEM_GBL_CFG_INTR_THR_S) & - Q_REG_TX_MEM_GBL_CFG_INTR_THR_M); + val |= FIELD_PREP(Q_REG_TX_MEM_GBL_CFG_INTR_THR_M, + threshold); } else { val &= ~Q_REG_TX_MEM_GBL_CFG_INTR_ENA_M; } @@ -1505,8 +1505,7 @@ ice_ptp_cfg_extts(struct ice_pf *pf, bool ena, unsigned int chan, u32 gpio_pin, * + num_in_channels * tmr_idx */ func = 1 + chan + (tmr_idx * 3); - gpio_reg = ((func << GLGEN_GPIO_CTL_PIN_FUNC_S) & - GLGEN_GPIO_CTL_PIN_FUNC_M); + gpio_reg = FIELD_PREP(GLGEN_GPIO_CTL_PIN_FUNC_M, func); pf->ptp.ext_ts_chan |= (1 << chan); } else { /* clear the values we set to reset defaults */ @@ -1616,7 +1615,7 @@ static int ice_ptp_cfg_clkout(struct ice_pf *pf, unsigned int chan, /* 4. write GPIO CTL reg */ func = 8 + chan + (tmr_idx * 4); val = GLGEN_GPIO_CTL_PIN_DIR_M | - ((func << GLGEN_GPIO_CTL_PIN_FUNC_S) & GLGEN_GPIO_CTL_PIN_FUNC_M); + FIELD_PREP(GLGEN_GPIO_CTL_PIN_FUNC_M, func); wr32(hw, GLGEN_GPIO_CTL(gpio_pin), val); /* Store the value if requested */ diff --git a/drivers/net/ethernet/intel/ice/ice_sriov.c b/drivers/net/ethernet/intel/ice/ice_sriov.c index 6d33dd647c78..54d602388c9c 100644 --- a/drivers/net/ethernet/intel/ice/ice_sriov.c +++ b/drivers/net/ethernet/intel/ice/ice_sriov.c @@ -106,10 +106,8 @@ static void ice_dis_vf_mappings(struct ice_vf *vf) for (v = first; v <= last; v++) { u32 reg; - reg = (((1 << GLINT_VECT2FUNC_IS_PF_S) & - GLINT_VECT2FUNC_IS_PF_M) | - ((hw->pf_id << GLINT_VECT2FUNC_PF_NUM_S) & - GLINT_VECT2FUNC_PF_NUM_M)); + reg = FIELD_PREP(GLINT_VECT2FUNC_IS_PF_M, 1) | + FIELD_PREP(GLINT_VECT2FUNC_PF_NUM_M, hw->pf_id); wr32(hw, GLINT_VECT2FUNC(v), reg); } @@ -275,24 +273,20 @@ static void ice_ena_vf_msix_mappings(struct ice_vf *vf) (device_based_first_msix + vf->num_msix) - 1; device_based_vf_id = vf->vf_id + hw->func_caps.vf_base_id; - reg = (((device_based_first_msix << VPINT_ALLOC_FIRST_S) & - VPINT_ALLOC_FIRST_M) | - ((device_based_last_msix << VPINT_ALLOC_LAST_S) & - VPINT_ALLOC_LAST_M) | VPINT_ALLOC_VALID_M); + reg = FIELD_PREP(VPINT_ALLOC_FIRST_M, device_based_first_msix) | + FIELD_PREP(VPINT_ALLOC_LAST_M, device_based_last_msix) | + VPINT_ALLOC_VALID_M; wr32(hw, VPINT_ALLOC(vf->vf_id), reg); - reg = (((device_based_first_msix << VPINT_ALLOC_PCI_FIRST_S) - & VPINT_ALLOC_PCI_FIRST_M) | - ((device_based_last_msix << VPINT_ALLOC_PCI_LAST_S) & - VPINT_ALLOC_PCI_LAST_M) | VPINT_ALLOC_PCI_VALID_M); + reg = FIELD_PREP(VPINT_ALLOC_PCI_FIRST_M, device_based_first_msix) | + FIELD_PREP(VPINT_ALLOC_PCI_LAST_M, device_based_last_msix) | + VPINT_ALLOC_PCI_VALID_M; wr32(hw, VPINT_ALLOC_PCI(vf->vf_id), reg); /* map the interrupts to its functions */ for (v = pf_based_first_msix; v <= pf_based_last_msix; v++) { - reg = (((device_based_vf_id << GLINT_VECT2FUNC_VF_NUM_S) & - 
GLINT_VECT2FUNC_VF_NUM_M) | - ((hw->pf_id << GLINT_VECT2FUNC_PF_NUM_S) & - GLINT_VECT2FUNC_PF_NUM_M)); + reg = FIELD_PREP(GLINT_VECT2FUNC_VF_NUM_M, device_based_vf_id) | + FIELD_PREP(GLINT_VECT2FUNC_PF_NUM_M, hw->pf_id); wr32(hw, GLINT_VECT2FUNC(v), reg); } @@ -325,10 +319,8 @@ static void ice_ena_vf_q_mappings(struct ice_vf *vf, u16 max_txq, u16 max_rxq) * VFNUMQ value should be set to (number of queues - 1). A value * of 0 means 1 queue and a value of 255 means 256 queues */ - reg = (((vsi->txq_map[0] << VPLAN_TX_QBASE_VFFIRSTQ_S) & - VPLAN_TX_QBASE_VFFIRSTQ_M) | - (((max_txq - 1) << VPLAN_TX_QBASE_VFNUMQ_S) & - VPLAN_TX_QBASE_VFNUMQ_M)); + reg = FIELD_PREP(VPLAN_TX_QBASE_VFFIRSTQ_M, vsi->txq_map[0]) | + FIELD_PREP(VPLAN_TX_QBASE_VFNUMQ_M, max_txq - 1); wr32(hw, VPLAN_TX_QBASE(vf->vf_id), reg); } else { dev_err(dev, "Scattered mode for VF Tx queues is not yet implemented\n"); @@ -343,10 +335,8 @@ static void ice_ena_vf_q_mappings(struct ice_vf *vf, u16 max_txq, u16 max_rxq) * VFNUMQ value should be set to (number of queues - 1). A value * of 0 means 1 queue and a value of 255 means 256 queues */ - reg = (((vsi->rxq_map[0] << VPLAN_RX_QBASE_VFFIRSTQ_S) & - VPLAN_RX_QBASE_VFFIRSTQ_M) | - (((max_rxq - 1) << VPLAN_RX_QBASE_VFNUMQ_S) & - VPLAN_RX_QBASE_VFNUMQ_M)); + reg = FIELD_PREP(VPLAN_RX_QBASE_VFFIRSTQ_M, vsi->rxq_map[0]) | + FIELD_PREP(VPLAN_RX_QBASE_VFNUMQ_M, max_rxq - 1); wr32(hw, VPLAN_RX_QBASE(vf->vf_id), reg); } else { dev_err(dev, "Scattered mode for VF Rx queues is not yet implemented\n"); diff --git a/drivers/net/ethernet/intel/ice/ice_switch.c b/drivers/net/ethernet/intel/ice/ice_switch.c index ee19f3aa3d19..dc5b34ca2d4a 100644 --- a/drivers/net/ethernet/intel/ice/ice_switch.c +++ b/drivers/net/ethernet/intel/ice/ice_switch.c @@ -2492,25 +2492,24 @@ ice_fill_sw_rule(struct ice_hw *hw, struct ice_fltr_info *f_info, switch (f_info->fltr_act) { case ICE_FWD_TO_VSI: - act |= (f_info->fwd_id.hw_vsi_id << ICE_SINGLE_ACT_VSI_ID_S) & - ICE_SINGLE_ACT_VSI_ID_M; + act |= FIELD_PREP(ICE_SINGLE_ACT_VSI_ID_M, + f_info->fwd_id.hw_vsi_id); if (f_info->lkup_type != ICE_SW_LKUP_VLAN) act |= ICE_SINGLE_ACT_VSI_FORWARDING | ICE_SINGLE_ACT_VALID_BIT; break; case ICE_FWD_TO_VSI_LIST: act |= ICE_SINGLE_ACT_VSI_LIST; - act |= (f_info->fwd_id.vsi_list_id << - ICE_SINGLE_ACT_VSI_LIST_ID_S) & - ICE_SINGLE_ACT_VSI_LIST_ID_M; + act |= FIELD_PREP(ICE_SINGLE_ACT_VSI_LIST_ID_M, + f_info->fwd_id.vsi_list_id); if (f_info->lkup_type != ICE_SW_LKUP_VLAN) act |= ICE_SINGLE_ACT_VSI_FORWARDING | ICE_SINGLE_ACT_VALID_BIT; break; case ICE_FWD_TO_Q: act |= ICE_SINGLE_ACT_TO_Q; - act |= (f_info->fwd_id.q_id << ICE_SINGLE_ACT_Q_INDEX_S) & - ICE_SINGLE_ACT_Q_INDEX_M; + act |= FIELD_PREP(ICE_SINGLE_ACT_Q_INDEX_M, + f_info->fwd_id.q_id); break; case ICE_DROP_PACKET: act |= ICE_SINGLE_ACT_VSI_FORWARDING | ICE_SINGLE_ACT_DROP | @@ -2520,10 +2519,9 @@ ice_fill_sw_rule(struct ice_hw *hw, struct ice_fltr_info *f_info, q_rgn = f_info->qgrp_size > 0 ? 
(u8)ilog2(f_info->qgrp_size) : 0; act |= ICE_SINGLE_ACT_TO_Q; - act |= (f_info->fwd_id.q_id << ICE_SINGLE_ACT_Q_INDEX_S) & - ICE_SINGLE_ACT_Q_INDEX_M; - act |= (q_rgn << ICE_SINGLE_ACT_Q_REGION_S) & - ICE_SINGLE_ACT_Q_REGION_M; + act |= FIELD_PREP(ICE_SINGLE_ACT_Q_INDEX_M, + f_info->fwd_id.q_id); + act |= FIELD_PREP(ICE_SINGLE_ACT_Q_REGION_M, q_rgn); break; default: return; @@ -2649,7 +2647,7 @@ ice_add_marker_act(struct ice_hw *hw, struct ice_fltr_mgmt_list_entry *m_ent, m_ent->fltr_info.fwd_id.hw_vsi_id; act = ICE_LG_ACT_VSI_FORWARDING | ICE_LG_ACT_VALID_BIT; - act |= (id << ICE_LG_ACT_VSI_LIST_ID_S) & ICE_LG_ACT_VSI_LIST_ID_M; + act |= FIELD_PREP(ICE_LG_ACT_VSI_LIST_ID_M, id); if (m_ent->vsi_count > 1) act |= ICE_LG_ACT_VSI_LIST; lg_act->act[0] = cpu_to_le32(act); @@ -2657,16 +2655,15 @@ ice_add_marker_act(struct ice_hw *hw, struct ice_fltr_mgmt_list_entry *m_ent, /* Second action descriptor type */ act = ICE_LG_ACT_GENERIC; - act |= (1 << ICE_LG_ACT_GENERIC_VALUE_S) & ICE_LG_ACT_GENERIC_VALUE_M; + act |= FIELD_PREP(ICE_LG_ACT_GENERIC_VALUE_M, 1); lg_act->act[1] = cpu_to_le32(act); - act = (ICE_LG_ACT_GENERIC_OFF_RX_DESC_PROF_IDX << - ICE_LG_ACT_GENERIC_OFFSET_S) & ICE_LG_ACT_GENERIC_OFFSET_M; + act = FIELD_PREP(ICE_LG_ACT_GENERIC_OFFSET_M, + ICE_LG_ACT_GENERIC_OFF_RX_DESC_PROF_IDX); /* Third action Marker value */ act |= ICE_LG_ACT_GENERIC; - act |= (sw_marker << ICE_LG_ACT_GENERIC_VALUE_S) & - ICE_LG_ACT_GENERIC_VALUE_M; + act |= FIELD_PREP(ICE_LG_ACT_GENERIC_VALUE_M, sw_marker); lg_act->act[2] = cpu_to_le32(act); @@ -2675,9 +2672,9 @@ ice_add_marker_act(struct ice_hw *hw, struct ice_fltr_mgmt_list_entry *m_ent, ice_aqc_opc_update_sw_rules); /* Update the action to point to the large action ID */ - rx_tx->act = cpu_to_le32(ICE_SINGLE_ACT_PTR | - ((l_id << ICE_SINGLE_ACT_PTR_VAL_S) & - ICE_SINGLE_ACT_PTR_VAL_M)); + act = ICE_SINGLE_ACT_PTR; + act |= FIELD_PREP(ICE_SINGLE_ACT_PTR_VAL_M, l_id); + rx_tx->act = cpu_to_le32(act); /* Use the filter rule ID of the previously created rule with single * act. 
Once the update happens, hardware will treat this as large @@ -4426,8 +4423,8 @@ ice_alloc_res_cntr(struct ice_hw *hw, u8 type, u8 alloc_shared, u16 num_items, int status; buf->num_elems = cpu_to_le16(num_items); - buf->res_type = cpu_to_le16(((type << ICE_AQC_RES_TYPE_S) & - ICE_AQC_RES_TYPE_M) | alloc_shared); + buf->res_type = cpu_to_le16(FIELD_PREP(ICE_AQC_RES_TYPE_M, type) | + alloc_shared); status = ice_aq_alloc_free_res(hw, buf, buf_len, ice_aqc_opc_alloc_res); if (status) @@ -4454,8 +4451,8 @@ ice_free_res_cntr(struct ice_hw *hw, u8 type, u8 alloc_shared, u16 num_items, int status; buf->num_elems = cpu_to_le16(num_items); - buf->res_type = cpu_to_le16(((type << ICE_AQC_RES_TYPE_S) & - ICE_AQC_RES_TYPE_M) | alloc_shared); + buf->res_type = cpu_to_le16(FIELD_PREP(ICE_AQC_RES_TYPE_M, type) | + alloc_shared); buf->elem[0].e.sw_resp = cpu_to_le16(counter_id); status = ice_aq_alloc_free_res(hw, buf, buf_len, ice_aqc_opc_free_res); @@ -4481,18 +4478,15 @@ int ice_share_res(struct ice_hw *hw, u16 type, u8 shared, u16 res_id) { DEFINE_FLEX(struct ice_aqc_alloc_free_res_elem, buf, elem, 1); u16 buf_len = __struct_size(buf); + u16 res_type; int status; buf->num_elems = cpu_to_le16(1); + res_type = FIELD_PREP(ICE_AQC_RES_TYPE_M, type); if (shared) - buf->res_type = cpu_to_le16(((type << ICE_AQC_RES_TYPE_S) & - ICE_AQC_RES_TYPE_M) | - ICE_AQC_RES_TYPE_FLAG_SHARED); - else - buf->res_type = cpu_to_le16(((type << ICE_AQC_RES_TYPE_S) & - ICE_AQC_RES_TYPE_M) & - ~ICE_AQC_RES_TYPE_FLAG_SHARED); + res_type |= ICE_AQC_RES_TYPE_FLAG_SHARED; + buf->res_type = cpu_to_le16(res_type); buf->elem[0].e.sw_resp = cpu_to_le16(res_id); status = ice_aq_alloc_free_res(hw, buf, buf_len, ice_aqc_opc_share_res); @@ -5024,8 +5018,8 @@ ice_add_sw_recipe(struct ice_hw *hw, struct ice_sw_recipe *rm, entry->chain_idx = chain_idx; content->result_indx = ICE_AQ_RECIPE_RESULT_EN | - ((chain_idx << ICE_AQ_RECIPE_RESULT_DATA_S) & - ICE_AQ_RECIPE_RESULT_DATA_M); + FIELD_PREP(ICE_AQ_RECIPE_RESULT_DATA_M, + chain_idx); clear_bit(chain_idx, result_idx_bm); chain_idx = find_first_bit(result_idx_bm, ICE_MAX_FV_WORDS); @@ -6125,23 +6119,22 @@ ice_add_adv_rule(struct ice_hw *hw, struct ice_adv_lkup_elem *lkups, switch (rinfo->sw_act.fltr_act) { case ICE_FWD_TO_VSI: - act |= (rinfo->sw_act.fwd_id.hw_vsi_id << - ICE_SINGLE_ACT_VSI_ID_S) & ICE_SINGLE_ACT_VSI_ID_M; + act |= FIELD_PREP(ICE_SINGLE_ACT_VSI_ID_M, + rinfo->sw_act.fwd_id.hw_vsi_id); act |= ICE_SINGLE_ACT_VSI_FORWARDING | ICE_SINGLE_ACT_VALID_BIT; break; case ICE_FWD_TO_Q: act |= ICE_SINGLE_ACT_TO_Q; - act |= (rinfo->sw_act.fwd_id.q_id << ICE_SINGLE_ACT_Q_INDEX_S) & - ICE_SINGLE_ACT_Q_INDEX_M; + act |= FIELD_PREP(ICE_SINGLE_ACT_Q_INDEX_M, + rinfo->sw_act.fwd_id.q_id); break; case ICE_FWD_TO_QGRP: q_rgn = rinfo->sw_act.qgrp_size > 0 ? 
(u8)ilog2(rinfo->sw_act.qgrp_size) : 0; act |= ICE_SINGLE_ACT_TO_Q; - act |= (rinfo->sw_act.fwd_id.q_id << ICE_SINGLE_ACT_Q_INDEX_S) & - ICE_SINGLE_ACT_Q_INDEX_M; - act |= (q_rgn << ICE_SINGLE_ACT_Q_REGION_S) & - ICE_SINGLE_ACT_Q_REGION_M; + act |= FIELD_PREP(ICE_SINGLE_ACT_Q_INDEX_M, + rinfo->sw_act.fwd_id.q_id); + act |= FIELD_PREP(ICE_SINGLE_ACT_Q_REGION_M, q_rgn); break; case ICE_DROP_PACKET: act |= ICE_SINGLE_ACT_VSI_FORWARDING | ICE_SINGLE_ACT_DROP | diff --git a/drivers/net/ethernet/intel/ice/ice_txrx.c b/drivers/net/ethernet/intel/ice/ice_txrx.c index 9e97ea863068..79d986273a9f 100644 --- a/drivers/net/ethernet/intel/ice/ice_txrx.c +++ b/drivers/net/ethernet/intel/ice/ice_txrx.c @@ -1494,9 +1494,9 @@ static void ice_set_wb_on_itr(struct ice_q_vector *q_vector) * be static in non-adaptive mode (user configured) */ wr32(&vsi->back->hw, GLINT_DYN_CTL(q_vector->reg_idx), - ((ICE_ITR_NONE << GLINT_DYN_CTL_ITR_INDX_S) & - GLINT_DYN_CTL_ITR_INDX_M) | GLINT_DYN_CTL_INTENA_MSK_M | - GLINT_DYN_CTL_WB_ON_ITR_M); + FIELD_PREP(GLINT_DYN_CTL_ITR_INDX_M, ICE_ITR_NONE) | + FIELD_PREP(GLINT_DYN_CTL_INTENA_MSK_M, 1) | + FIELD_PREP(GLINT_DYN_CTL_WB_ON_ITR_M, 1)); q_vector->wb_on_itr = true; } diff --git a/drivers/net/ethernet/intel/ice/ice_virtchnl.c b/drivers/net/ethernet/intel/ice/ice_virtchnl.c index 5261ba802c36..8bec83965e50 100644 --- a/drivers/net/ethernet/intel/ice/ice_virtchnl.c +++ b/drivers/net/ethernet/intel/ice/ice_virtchnl.c @@ -832,11 +832,9 @@ static int ice_vc_handle_rss_cfg(struct ice_vf *vf, u8 *msg, bool add) goto error_param; } - ctx->info.q_opt_rss = ((lut_type << - ICE_AQ_VSI_Q_OPT_RSS_LUT_S) & - ICE_AQ_VSI_Q_OPT_RSS_LUT_M) | - (hash_type & - ICE_AQ_VSI_Q_OPT_RSS_HASH_M); + ctx->info.q_opt_rss = FIELD_PREP(ICE_AQ_VSI_Q_OPT_RSS_LUT_M, + lut_type) | + (hash_type & ICE_AQ_VSI_Q_OPT_RSS_HASH_M); /* Preserve existing queueing option setting */ ctx->info.q_opt_rss |= (vsi->info.q_opt_rss & diff --git a/drivers/net/ethernet/intel/ice/ice_virtchnl_fdir.c b/drivers/net/ethernet/intel/ice/ice_virtchnl_fdir.c index 24b23b7ef04a..e62104f895a1 100644 --- a/drivers/net/ethernet/intel/ice/ice_virtchnl_fdir.c +++ b/drivers/net/ethernet/intel/ice/ice_virtchnl_fdir.c @@ -21,7 +21,7 @@ */ #define ICE_FLOW_PROF_FD(vsi, flow, tun_offs) \ ((u64)(((((flow) + (tun_offs)) & ICE_FLOW_PROF_TYPE_M)) | \ - (((u64)(vsi) << ICE_FLOW_PROF_VSI_S) & ICE_FLOW_PROF_VSI_M))) + FIELD_PREP(ICE_FLOW_PROF_VSI_M, vsi))) #define GTPU_TEID_OFFSET 4 #define GTPU_EH_QFI_OFFSET 1 diff --git a/drivers/net/ethernet/intel/ice/ice_vsi_vlan_lib.c b/drivers/net/ethernet/intel/ice/ice_vsi_vlan_lib.c index 76266e709a39..699bcb1e2f3a 100644 --- a/drivers/net/ethernet/intel/ice/ice_vsi_vlan_lib.c +++ b/drivers/net/ethernet/intel/ice/ice_vsi_vlan_lib.c @@ -481,10 +481,11 @@ int ice_vsi_ena_outer_stripping(struct ice_vsi *vsi, u16 tpid) ctxt->info.outer_vlan_flags = vsi->info.outer_vlan_flags & ~(ICE_AQ_VSI_OUTER_VLAN_EMODE_M | ICE_AQ_VSI_OUTER_TAG_TYPE_M); ctxt->info.outer_vlan_flags |= - ((ICE_AQ_VSI_OUTER_VLAN_EMODE_SHOW_BOTH << - ICE_AQ_VSI_OUTER_VLAN_EMODE_S) | - ((tag_type << ICE_AQ_VSI_OUTER_TAG_TYPE_S) & - ICE_AQ_VSI_OUTER_TAG_TYPE_M)); + /* we want EMODE_SHOW_BOTH, but that value is zero, so the line + * above clears it well enough that we don't need to try to set + * zero here, so just do the tag type + */ + FIELD_PREP(ICE_AQ_VSI_OUTER_TAG_TYPE_M, tag_type); err = ice_update_vsi(hw, vsi->idx, ctxt, NULL); if (err) @@ -589,11 +590,9 @@ int ice_vsi_ena_outer_insertion(struct ice_vsi *vsi, u16 tpid) 
ICE_AQ_VSI_OUTER_VLAN_TX_MODE_M | ICE_AQ_VSI_OUTER_TAG_TYPE_M); ctxt->info.outer_vlan_flags |= - ((ICE_AQ_VSI_OUTER_VLAN_TX_MODE_ALL << - ICE_AQ_VSI_OUTER_VLAN_TX_MODE_S) & - ICE_AQ_VSI_OUTER_VLAN_TX_MODE_M) | - ((tag_type << ICE_AQ_VSI_OUTER_TAG_TYPE_S) & - ICE_AQ_VSI_OUTER_TAG_TYPE_M); + FIELD_PREP(ICE_AQ_VSI_OUTER_VLAN_TX_MODE_M, + ICE_AQ_VSI_OUTER_VLAN_TX_MODE_ALL) | + FIELD_PREP(ICE_AQ_VSI_OUTER_TAG_TYPE_M, tag_type); err = ice_update_vsi(hw, vsi->idx, ctxt, NULL); if (err) @@ -642,9 +641,8 @@ int ice_vsi_dis_outer_insertion(struct ice_vsi *vsi) ICE_AQ_VSI_OUTER_VLAN_TX_MODE_M); ctxt->info.outer_vlan_flags |= ICE_AQ_VSI_OUTER_VLAN_BLOCK_TX_DESC | - ((ICE_AQ_VSI_OUTER_VLAN_TX_MODE_ALL << - ICE_AQ_VSI_OUTER_VLAN_TX_MODE_S) & - ICE_AQ_VSI_OUTER_VLAN_TX_MODE_M); + FIELD_PREP(ICE_AQ_VSI_OUTER_VLAN_TX_MODE_M, + ICE_AQ_VSI_OUTER_VLAN_TX_MODE_ALL); err = ice_update_vsi(hw, vsi->idx, ctxt, NULL); if (err) @@ -702,8 +700,7 @@ __ice_vsi_set_outer_port_vlan(struct ice_vsi *vsi, u16 vlan_info, u16 tpid) ctxt->info.outer_vlan_flags = (ICE_AQ_VSI_OUTER_VLAN_EMODE_SHOW << ICE_AQ_VSI_OUTER_VLAN_EMODE_S) | - ((tag_type << ICE_AQ_VSI_OUTER_TAG_TYPE_S) & - ICE_AQ_VSI_OUTER_TAG_TYPE_M) | + FIELD_PREP(ICE_AQ_VSI_OUTER_TAG_TYPE_M, tag_type) | ICE_AQ_VSI_OUTER_VLAN_BLOCK_TX_DESC | (ICE_AQ_VSI_OUTER_VLAN_TX_MODE_ACCEPTUNTAGGED << ICE_AQ_VSI_OUTER_VLAN_TX_MODE_S) | From patchwork Wed Dec 6 01:01:06 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jesse Brandeburg X-Patchwork-Id: 13480914 X-Patchwork-Delegate: kuba@kernel.org Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b="ATTRzLux" Received: from mgamail.intel.com (mgamail.intel.com [198.175.65.9]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 70BA1C6 for ; Tue, 5 Dec 2023 17:01:37 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1701824498; x=1733360498; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=e+QFVfyXGcOfgMRqYt0Z+hr05kpCo5q2ht81uNlADsI=; b=ATTRzLuxPIKuNo+y8G/fwjWm9bS/8N1YI0MnsQIJQDaapZmXr0lqrh4b /fXMqUYzgn8DFUkXjUbPf+1mF6mvweBLpOfZEepM0P47sNi2SwL2c+MMg kQLL7rZGspzSPp3fl8vdiUEmdDRUFlhoJ6V0PcVpL4iAWRije8qHHyVL2 +oi/DgKbqcOkuA0ofWHhdIfT0qdcOUvooJiZkHFlYnBwosvuqszmswp/Y wI37bNkIxOv4YEp4zQri0qM9IsyX2+BmfuQuK73b2bce4QLZa2BamgxWe ndAsYU5VzgUJ1wEyiCINNlL53hvu2vU349iIgO280xD8JWHIee5nOTtdp A==; X-IronPort-AV: E=McAfee;i="6600,9927,10915"; a="12700298" X-IronPort-AV: E=Sophos;i="6.04,254,1695711600"; d="scan'208";a="12700298" Received: from fmsmga004.fm.intel.com ([10.253.24.48]) by orvoesa101.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 05 Dec 2023 17:01:34 -0800 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10915"; a="841655258" X-IronPort-AV: E=Sophos;i="6.04,254,1695711600"; d="scan'208";a="841655258" Received: from jbrandeb-spr1.jf.intel.com ([10.166.28.233]) by fmsmga004-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 05 Dec 2023 17:01:33 -0800 From: Jesse Brandeburg To: intel-wired-lan@lists.osuosl.org Cc: Jesse Brandeburg , netdev@vger.kernel.org, aleksander.lobakin@intel.com, przemyslaw.kitszel@intel.com, horms@kernel.org, marcin.szycik@linux.intel.com Subject: [PATCH iwl-next v2 07/15] ice: fix pre-shifted bit usage Date: Tue, 5 Dec 2023 17:01:06 -0800 Message-Id: <20231206010114.2259388-8-jesse.brandeburg@intel.com> X-Mailer: git-send-email 2.39.3 
In-Reply-To: <20231206010114.2259388-1-jesse.brandeburg@intel.com> References: <20231206010114.2259388-1-jesse.brandeburg@intel.com> Precedence: bulk X-Mailing-List: netdev@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Patchwork-Delegate: kuba@kernel.org While converting to FIELD_PREP() and FIELD_GET(), it was noticed that some of the RSS defines had *included* the shift in their definitions. This is completely outside of normal, such that a developer could easily make a mistake and shift at the usage site (like when using FIELD_PREP()). Rename the defines and set them to the "pre-shifted values" so they match the template the driver normally uses for masks and the member bits of the mask, which also allows the driver to use FIELD_PREP correctly with these values. Use GENMASK() for this changed MASK value. Do the same for the VLAN EMODE defines as well. Reviewed-by: Marcin Szycik Reviewed-by: Simon Horman Signed-off-by: Jesse Brandeburg Tested-by: Pucha Himasekhar Reddy (A Contingent worker at Intel) --- v2: no change --- .../net/ethernet/intel/ice/ice_adminq_cmd.h | 18 +++++++++--------- drivers/net/ethernet/intel/ice/ice_lib.c | 7 ++++--- drivers/net/ethernet/intel/ice/ice_virtchnl.c | 10 +++++----- .../net/ethernet/intel/ice/ice_vsi_vlan_lib.c | 16 +++++++++++----- 4 files changed, 29 insertions(+), 22 deletions(-) diff --git a/drivers/net/ethernet/intel/ice/ice_adminq_cmd.h b/drivers/net/ethernet/intel/ice/ice_adminq_cmd.h index f77a3c70f262..51c241ab6b8e 100644 --- a/drivers/net/ethernet/intel/ice/ice_adminq_cmd.h +++ b/drivers/net/ethernet/intel/ice/ice_adminq_cmd.h @@ -422,10 +422,10 @@ struct ice_aqc_vsi_props { #define ICE_AQ_VSI_INNER_VLAN_INSERT_PVID BIT(2) #define ICE_AQ_VSI_INNER_VLAN_EMODE_S 3 #define ICE_AQ_VSI_INNER_VLAN_EMODE_M (0x3 << ICE_AQ_VSI_INNER_VLAN_EMODE_S) -#define ICE_AQ_VSI_INNER_VLAN_EMODE_STR_BOTH (0x0 << ICE_AQ_VSI_INNER_VLAN_EMODE_S) -#define ICE_AQ_VSI_INNER_VLAN_EMODE_STR_UP (0x1 << ICE_AQ_VSI_INNER_VLAN_EMODE_S) -#define ICE_AQ_VSI_INNER_VLAN_EMODE_STR (0x2 << ICE_AQ_VSI_INNER_VLAN_EMODE_S) -#define ICE_AQ_VSI_INNER_VLAN_EMODE_NOTHING (0x3 << ICE_AQ_VSI_INNER_VLAN_EMODE_S) +#define ICE_AQ_VSI_INNER_VLAN_EMODE_STR_BOTH 0x0U +#define ICE_AQ_VSI_INNER_VLAN_EMODE_STR_UP 0x1U +#define ICE_AQ_VSI_INNER_VLAN_EMODE_STR 0x2U +#define ICE_AQ_VSI_INNER_VLAN_EMODE_NOTHING 0x3U u8 inner_vlan_reserved2[3]; /* ingress egress up sections */ __le32 ingress_table; /* bitmap, 3 bits per up */ @@ -491,11 +491,11 @@ struct ice_aqc_vsi_props { #define ICE_AQ_VSI_Q_OPT_RSS_GBL_LUT_S 2 #define ICE_AQ_VSI_Q_OPT_RSS_GBL_LUT_M (0xF << ICE_AQ_VSI_Q_OPT_RSS_GBL_LUT_S) #define ICE_AQ_VSI_Q_OPT_RSS_HASH_S 6 -#define ICE_AQ_VSI_Q_OPT_RSS_HASH_M (0x3 << ICE_AQ_VSI_Q_OPT_RSS_HASH_S) -#define ICE_AQ_VSI_Q_OPT_RSS_TPLZ (0x0 << ICE_AQ_VSI_Q_OPT_RSS_HASH_S) -#define ICE_AQ_VSI_Q_OPT_RSS_SYM_TPLZ (0x1 << ICE_AQ_VSI_Q_OPT_RSS_HASH_S) -#define ICE_AQ_VSI_Q_OPT_RSS_XOR (0x2 << ICE_AQ_VSI_Q_OPT_RSS_HASH_S) -#define ICE_AQ_VSI_Q_OPT_RSS_JHASH (0x3 << ICE_AQ_VSI_Q_OPT_RSS_HASH_S) +#define ICE_AQ_VSI_Q_OPT_RSS_HASH_M GENMASK(7, 6) +#define ICE_AQ_VSI_Q_OPT_RSS_HASH_TPLZ 0x0U +#define ICE_AQ_VSI_Q_OPT_RSS_HASH_SYM_TPLZ 0x1U +#define ICE_AQ_VSI_Q_OPT_RSS_HASH_XOR 0x2U +#define ICE_AQ_VSI_Q_OPT_RSS_HASH_JHASH 0x3U u8 q_opt_tc; #define ICE_AQ_VSI_Q_OPT_TC_OVR_S 0 #define ICE_AQ_VSI_Q_OPT_TC_OVR_M (0x1F << ICE_AQ_VSI_Q_OPT_TC_OVR_S) diff --git a/drivers/net/ethernet/intel/ice/ice_lib.c b/drivers/net/ethernet/intel/ice/ice_lib.c index 394f915290f6..453eba59abb2 100644 --- 
a/drivers/net/ethernet/intel/ice/ice_lib.c +++ b/drivers/net/ethernet/intel/ice/ice_lib.c @@ -984,7 +984,8 @@ static void ice_set_dflt_vsi_ctx(struct ice_hw *hw, struct ice_vsi_ctx *ctxt) */ if (ice_is_dvm_ena(hw)) { ctxt->info.inner_vlan_flags |= - ICE_AQ_VSI_INNER_VLAN_EMODE_NOTHING; + FIELD_PREP(ICE_AQ_VSI_INNER_VLAN_EMODE_M, + ICE_AQ_VSI_INNER_VLAN_EMODE_NOTHING); ctxt->info.outer_vlan_flags = FIELD_PREP(ICE_AQ_VSI_OUTER_VLAN_TX_MODE_M, ICE_AQ_VSI_OUTER_VLAN_TX_MODE_ALL); @@ -1183,12 +1184,12 @@ static void ice_set_rss_vsi_ctx(struct ice_vsi_ctx *ctxt, struct ice_vsi *vsi) case ICE_VSI_PF: /* PF VSI will inherit RSS instance of PF */ lut_type = ICE_AQ_VSI_Q_OPT_RSS_LUT_PF; - hash_type = ICE_AQ_VSI_Q_OPT_RSS_TPLZ; + hash_type = ICE_AQ_VSI_Q_OPT_RSS_HASH_TPLZ; break; case ICE_VSI_VF: /* VF VSI will gets a small RSS table which is a VSI LUT type */ lut_type = ICE_AQ_VSI_Q_OPT_RSS_LUT_VSI; - hash_type = ICE_AQ_VSI_Q_OPT_RSS_TPLZ; + hash_type = ICE_AQ_VSI_Q_OPT_RSS_HASH_TPLZ; break; default: dev_dbg(dev, "Unsupported VSI type %s\n", diff --git a/drivers/net/ethernet/intel/ice/ice_virtchnl.c b/drivers/net/ethernet/intel/ice/ice_virtchnl.c index 8bec83965e50..727aebe24b92 100644 --- a/drivers/net/ethernet/intel/ice/ice_virtchnl.c +++ b/drivers/net/ethernet/intel/ice/ice_virtchnl.c @@ -823,8 +823,8 @@ static int ice_vc_handle_rss_cfg(struct ice_vf *vf, u8 *msg, bool add) int status; lut_type = ICE_AQ_VSI_Q_OPT_RSS_LUT_VSI; - hash_type = add ? ICE_AQ_VSI_Q_OPT_RSS_XOR : - ICE_AQ_VSI_Q_OPT_RSS_TPLZ; + hash_type = add ? ICE_AQ_VSI_Q_OPT_RSS_HASH_XOR : + ICE_AQ_VSI_Q_OPT_RSS_HASH_TPLZ; ctx = kzalloc(sizeof(*ctx), GFP_KERNEL); if (!ctx) { @@ -832,9 +832,9 @@ static int ice_vc_handle_rss_cfg(struct ice_vf *vf, u8 *msg, bool add) goto error_param; } - ctx->info.q_opt_rss = FIELD_PREP(ICE_AQ_VSI_Q_OPT_RSS_LUT_M, - lut_type) | - (hash_type & ICE_AQ_VSI_Q_OPT_RSS_HASH_M); + ctx->info.q_opt_rss = + FIELD_PREP(ICE_AQ_VSI_Q_OPT_RSS_LUT_M, lut_type) | + FIELD_PREP(ICE_AQ_VSI_Q_OPT_RSS_HASH_M, hash_type); /* Preserve existing queueing option setting */ ctx->info.q_opt_rss |= (vsi->info.q_opt_rss & diff --git a/drivers/net/ethernet/intel/ice/ice_vsi_vlan_lib.c b/drivers/net/ethernet/intel/ice/ice_vsi_vlan_lib.c index 699bcb1e2f3a..2e9ad27cb9d1 100644 --- a/drivers/net/ethernet/intel/ice/ice_vsi_vlan_lib.c +++ b/drivers/net/ethernet/intel/ice/ice_vsi_vlan_lib.c @@ -131,6 +131,7 @@ static int ice_vsi_manage_vlan_stripping(struct ice_vsi *vsi, bool ena) { struct ice_hw *hw = &vsi->back->hw; struct ice_vsi_ctx *ctxt; + u8 *ivf; int err; /* do not allow modifying VLAN stripping when a port VLAN is configured @@ -143,19 +144,24 @@ static int ice_vsi_manage_vlan_stripping(struct ice_vsi *vsi, bool ena) if (!ctxt) return -ENOMEM; + ivf = &ctxt->info.inner_vlan_flags; + /* Here we are configuring what the VSI should do with the VLAN tag in * the Rx packet. We can either leave the tag in the packet or put it in * the Rx descriptor. */ - if (ena) + if (ena) { /* Strip VLAN tag from Rx packet and put it in the desc */ - ctxt->info.inner_vlan_flags = ICE_AQ_VSI_INNER_VLAN_EMODE_STR_BOTH; - else + *ivf = FIELD_PREP(ICE_AQ_VSI_INNER_VLAN_EMODE_M, + ICE_AQ_VSI_INNER_VLAN_EMODE_STR_BOTH); + } else { /* Disable stripping. 
Leave tag in packet */ - ctxt->info.inner_vlan_flags = ICE_AQ_VSI_INNER_VLAN_EMODE_NOTHING; + *ivf = FIELD_PREP(ICE_AQ_VSI_INNER_VLAN_EMODE_M, + ICE_AQ_VSI_INNER_VLAN_EMODE_NOTHING); + } /* Allow all packets untagged/tagged */ - ctxt->info.inner_vlan_flags |= ICE_AQ_VSI_INNER_VLAN_TX_MODE_ALL; + *ivf |= ICE_AQ_VSI_INNER_VLAN_TX_MODE_ALL; ctxt->info.valid_sections = cpu_to_le16(ICE_AQ_VSI_PROP_VLAN_VALID); From patchwork Wed Dec 6 01:01:07 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jesse Brandeburg X-Patchwork-Id: 13480915 X-Patchwork-Delegate: kuba@kernel.org Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b="l4WDaooh" Received: from mgamail.intel.com (mgamail.intel.com [198.175.65.9]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 38E09181 for ; Tue, 5 Dec 2023 17:01:38 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1701824499; x=1733360499; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=kvzei3+ovK/51i0s2+v6v/D+4CvRA9trUDjXetYyV9U=; b=l4WDaoohoojHAramHnKxMKBZ+U7gwNpmikzwW53VpTlbw/yETp1X4IoE yjyCKyoU4E0HVc3VhtHUK/1B8XC/ipm5rst2yjfilCwS7WeXBCWtaKuWf VBbLWyFCTSEUlLUE9EeDuH0U+02eM3MOd0k43IiA6l1wPqYnOgdR0bp1z qfhsIVuK8FB0Yw6a5PuBBg/YRX16VUvqyPHkXeGATLpljup5GqZFucViq CAe9WwUTZFq1C+uOV8NiZck/azvcNICx2jVfb+15r/hQ3ZxKxi6VB97iX TYL8an9Jq7iBA1H8+eJEAnl4bhFFVujT+wgPlxPts22mIumZu9iTS18mp w==; X-IronPort-AV: E=McAfee;i="6600,9927,10915"; a="12700302" X-IronPort-AV: E=Sophos;i="6.04,254,1695711600"; d="scan'208";a="12700302" Received: from fmsmga004.fm.intel.com ([10.253.24.48]) by orvoesa101.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 05 Dec 2023 17:01:35 -0800 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10915"; a="841655262" X-IronPort-AV: E=Sophos;i="6.04,254,1695711600"; d="scan'208";a="841655262" Received: from jbrandeb-spr1.jf.intel.com ([10.166.28.233]) by fmsmga004-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 05 Dec 2023 17:01:33 -0800 From: Jesse Brandeburg To: intel-wired-lan@lists.osuosl.org Cc: Jesse Brandeburg , netdev@vger.kernel.org, aleksander.lobakin@intel.com, przemyslaw.kitszel@intel.com, horms@kernel.org, marcin.szycik@linux.intel.com, Julia Lawall , Sasha Neftin Subject: [PATCH iwl-next v2 08/15] igc: field prep conversion Date: Tue, 5 Dec 2023 17:01:07 -0800 Message-Id: <20231206010114.2259388-9-jesse.brandeburg@intel.com> X-Mailer: git-send-email 2.39.3 In-Reply-To: <20231206010114.2259388-1-jesse.brandeburg@intel.com> References: <20231206010114.2259388-1-jesse.brandeburg@intel.com> Precedence: bulk X-Mailing-List: netdev@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Patchwork-Delegate: kuba@kernel.org Refactor igc driver to use FIELD_PREP(), which reduces lines of code and adds clarity of intent. This code was generated by the following coccinelle/spatch script and then manually repaired in a later patch. 
@prep2@ constant shift,mask; type T; expression a; @@ -(((T)(a) << shift) & mask) +FIELD_PREP(mask, a) @prep@ constant shift,mask; type T; expression a; @@ -((T)((a) << shift) & mask) +FIELD_PREP(mask, a) Cc: Julia Lawall Cc: Sasha Neftin Reviewed-by: Marcin Szycik Reviewed-by: Simon Horman Signed-off-by: Jesse Brandeburg --- v2: no change --- drivers/net/ethernet/intel/igc/igc_main.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/drivers/net/ethernet/intel/igc/igc_main.c b/drivers/net/ethernet/intel/igc/igc_main.c index 61db1d3bfa0b..d949289a3ddb 100644 --- a/drivers/net/ethernet/intel/igc/igc_main.c +++ b/drivers/net/ethernet/intel/igc/igc_main.c @@ -3452,8 +3452,8 @@ static int igc_write_flex_filter_ll(struct igc_adapter *adapter, /* Configure filter */ queuing = input->length & IGC_FHFT_LENGTH_MASK; - queuing |= (input->rx_queue << IGC_FHFT_QUEUE_SHIFT) & IGC_FHFT_QUEUE_MASK; - queuing |= (input->prio << IGC_FHFT_PRIO_SHIFT) & IGC_FHFT_PRIO_MASK; + queuing |= FIELD_PREP(IGC_FHFT_QUEUE_MASK, input->rx_queue); + queuing |= FIELD_PREP(IGC_FHFT_PRIO_MASK, input->prio); if (input->immediate_irq) queuing |= IGC_FHFT_IMM_INT; From patchwork Wed Dec 6 01:01:08 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jesse Brandeburg X-Patchwork-Id: 13480920 X-Patchwork-Delegate: kuba@kernel.org Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b="kjVO+DoT" Received: from mgamail.intel.com (mgamail.intel.com [198.175.65.9]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id BF19B1AA for ; Tue, 5 Dec 2023 17:01:38 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1701824499; x=1733360499; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=EUWVs5YbcU2J+hxjuoI50OPulMgDUrnl+4TE9uCkv1g=; b=kjVO+DoT6muK8hZA24kuNoig5+dUgcBZoxe665xiE94wKc/msIcPsk6z x6aUjr2ZTc3jyTiYixXfMPBcVa8LGAGqawa7sd0suvvU3niC4Ky0SUuMX LzutLGaBuO5Pt92ZHqVMuz3O7Hdf8883jTyOJwRq+kJyBpncob2pFh4RO nRD5xSZH+JgXc2GxaGIHwETwO1CVQWaKOG3gG6dF+SUokOhRsOzRZ6gW5 9lj0vL2LvX2ysSEwbDsJUsjT5vHSgk1qP9YS+tH0fOpfMSNg7HWi4/nit fcLG3vocq+KJPftpID/VRDvN3GEnZvQolf3oeZB69QyV9eersMQq2MSkZ A==; X-IronPort-AV: E=McAfee;i="6600,9927,10915"; a="12700306" X-IronPort-AV: E=Sophos;i="6.04,254,1695711600"; d="scan'208";a="12700306" Received: from fmsmga004.fm.intel.com ([10.253.24.48]) by orvoesa101.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 05 Dec 2023 17:01:35 -0800 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10915"; a="841655267" X-IronPort-AV: E=Sophos;i="6.04,254,1695711600"; d="scan'208";a="841655267" Received: from jbrandeb-spr1.jf.intel.com ([10.166.28.233]) by fmsmga004-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 05 Dec 2023 17:01:34 -0800 From: Jesse Brandeburg To: intel-wired-lan@lists.osuosl.org Cc: Jesse Brandeburg , netdev@vger.kernel.org, aleksander.lobakin@intel.com, przemyslaw.kitszel@intel.com, horms@kernel.org, marcin.szycik@linux.intel.com, Julia Lawall Subject: [PATCH iwl-next v2 09/15] intel: legacy: field get conversion Date: Tue, 5 Dec 2023 17:01:08 -0800 Message-Id: <20231206010114.2259388-10-jesse.brandeburg@intel.com> X-Mailer: git-send-email 2.39.3 In-Reply-To: <20231206010114.2259388-1-jesse.brandeburg@intel.com> References: <20231206010114.2259388-1-jesse.brandeburg@intel.com> Precedence: bulk X-Mailing-List: 
netdev@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Patchwork-Delegate: kuba@kernel.org Refactor several older Intel drivers to use FIELD_GET(), which reduces lines of code and adds clarity of intent. This code was generated by the following coccinelle/spatch script and then manually repaired. @get@ constant shift,mask; type T; expression a; @@ ( -((T)((a) & mask) >> shift) +FIELD_GET(mask, a) and applied via: spatch --sp-file field_prep.cocci --in-place --dir \ drivers/net/ethernet/intel/ Cc: Julia Lawall CC: Alexander Lobakin Reviewed-by: Marcin Szycik Reviewed-by: Simon Horman Signed-off-by: Jesse Brandeburg Tested-by: Pucha Himasekhar Reddy (A Contingent worker at Intel) --- v2: update to le16_encode_bits in one spot --- drivers/net/ethernet/intel/e1000/e1000_hw.c | 45 ++++++++----------- .../net/ethernet/intel/e1000e/80003es2lan.c | 3 +- drivers/net/ethernet/intel/e1000e/82571.c | 3 +- drivers/net/ethernet/intel/e1000e/ethtool.c | 7 ++- drivers/net/ethernet/intel/e1000e/ich8lan.c | 18 +++----- drivers/net/ethernet/intel/e1000e/mac.c | 8 ++-- drivers/net/ethernet/intel/e1000e/netdev.c | 11 ++--- drivers/net/ethernet/intel/e1000e/phy.c | 17 +++---- drivers/net/ethernet/intel/fm10k/fm10k_pf.c | 3 +- drivers/net/ethernet/intel/fm10k/fm10k_vf.c | 9 ++-- drivers/net/ethernet/intel/igb/e1000_82575.c | 29 +++++------- drivers/net/ethernet/intel/igb/e1000_i210.c | 15 ++++--- drivers/net/ethernet/intel/igb/e1000_mac.c | 7 ++- drivers/net/ethernet/intel/igb/e1000_nvm.c | 14 +++--- drivers/net/ethernet/intel/igb/e1000_phy.c | 9 ++-- drivers/net/ethernet/intel/igb/igb_ethtool.c | 8 ++-- drivers/net/ethernet/intel/igb/igb_main.c | 4 +- drivers/net/ethernet/intel/igbvf/mbx.c | 1 + drivers/net/ethernet/intel/igbvf/netdev.c | 5 +-- .../net/ethernet/intel/ixgbe/ixgbe_common.c | 30 ++++++------- drivers/net/ethernet/intel/ixgbe/ixgbe_main.c | 2 +- drivers/net/ethernet/intel/ixgbe/ixgbe_phy.c | 8 ++-- .../net/ethernet/intel/ixgbe/ixgbe_sriov.c | 8 ++-- drivers/net/ethernet/intel/ixgbe/ixgbe_x540.c | 8 ++-- drivers/net/ethernet/intel/ixgbe/ixgbe_x550.c | 19 ++++---- 25 files changed, 123 insertions(+), 168 deletions(-) diff --git a/drivers/net/ethernet/intel/e1000/e1000_hw.c b/drivers/net/ethernet/intel/e1000/e1000_hw.c index 4576511c99f5..f9328f2e669f 100644 --- a/drivers/net/ethernet/intel/e1000/e1000_hw.c +++ b/drivers/net/ethernet/intel/e1000/e1000_hw.c @@ -3261,8 +3261,7 @@ static s32 e1000_phy_igp_get_info(struct e1000_hw *hw, return ret_val; phy_info->mdix_mode = - (e1000_auto_x_mode) ((phy_data & IGP01E1000_PSSR_MDIX) >> - IGP01E1000_PSSR_MDIX_SHIFT); + (e1000_auto_x_mode)FIELD_GET(IGP01E1000_PSSR_MDIX, phy_data); if ((phy_data & IGP01E1000_PSSR_SPEED_MASK) == IGP01E1000_PSSR_SPEED_1000MBPS) { @@ -3273,11 +3272,11 @@ static s32 e1000_phy_igp_get_info(struct e1000_hw *hw, if (ret_val) return ret_val; - phy_info->local_rx = ((phy_data & SR_1000T_LOCAL_RX_STATUS) >> - SR_1000T_LOCAL_RX_STATUS_SHIFT) ? + phy_info->local_rx = FIELD_GET(SR_1000T_LOCAL_RX_STATUS, + phy_data) ? e1000_1000t_rx_status_ok : e1000_1000t_rx_status_not_ok; - phy_info->remote_rx = ((phy_data & SR_1000T_REMOTE_RX_STATUS) >> - SR_1000T_REMOTE_RX_STATUS_SHIFT) ? + phy_info->remote_rx = FIELD_GET(SR_1000T_REMOTE_RX_STATUS, + phy_data) ? 
e1000_1000t_rx_status_ok : e1000_1000t_rx_status_not_ok; /* Get cable length */ @@ -3327,14 +3326,12 @@ static s32 e1000_phy_m88_get_info(struct e1000_hw *hw, return ret_val; phy_info->extended_10bt_distance = - ((phy_data & M88E1000_PSCR_10BT_EXT_DIST_ENABLE) >> - M88E1000_PSCR_10BT_EXT_DIST_ENABLE_SHIFT) ? + FIELD_GET(M88E1000_PSCR_10BT_EXT_DIST_ENABLE, phy_data) ? e1000_10bt_ext_dist_enable_lower : e1000_10bt_ext_dist_enable_normal; phy_info->polarity_correction = - ((phy_data & M88E1000_PSCR_POLARITY_REVERSAL) >> - M88E1000_PSCR_POLARITY_REVERSAL_SHIFT) ? + FIELD_GET(M88E1000_PSCR_POLARITY_REVERSAL, phy_data) ? e1000_polarity_reversal_disabled : e1000_polarity_reversal_enabled; /* Check polarity status */ @@ -3348,27 +3345,25 @@ static s32 e1000_phy_m88_get_info(struct e1000_hw *hw, return ret_val; phy_info->mdix_mode = - (e1000_auto_x_mode) ((phy_data & M88E1000_PSSR_MDIX) >> - M88E1000_PSSR_MDIX_SHIFT); + (e1000_auto_x_mode)FIELD_GET(M88E1000_PSSR_MDIX, phy_data); if ((phy_data & M88E1000_PSSR_SPEED) == M88E1000_PSSR_1000MBS) { /* Cable Length Estimation and Local/Remote Receiver Information * are only valid at 1000 Mbps. */ phy_info->cable_length = - (e1000_cable_length) ((phy_data & - M88E1000_PSSR_CABLE_LENGTH) >> - M88E1000_PSSR_CABLE_LENGTH_SHIFT); + (e1000_cable_length)FIELD_GET(M88E1000_PSSR_CABLE_LENGTH, + phy_data); ret_val = e1000_read_phy_reg(hw, PHY_1000T_STATUS, &phy_data); if (ret_val) return ret_val; - phy_info->local_rx = ((phy_data & SR_1000T_LOCAL_RX_STATUS) >> - SR_1000T_LOCAL_RX_STATUS_SHIFT) ? + phy_info->local_rx = FIELD_GET(SR_1000T_LOCAL_RX_STATUS, + phy_data) ? e1000_1000t_rx_status_ok : e1000_1000t_rx_status_not_ok; - phy_info->remote_rx = ((phy_data & SR_1000T_REMOTE_RX_STATUS) >> - SR_1000T_REMOTE_RX_STATUS_SHIFT) ? + phy_info->remote_rx = FIELD_GET(SR_1000T_REMOTE_RX_STATUS, + phy_data) ? e1000_1000t_rx_status_ok : e1000_1000t_rx_status_not_ok; } @@ -3516,7 +3511,7 @@ s32 e1000_init_eeprom_params(struct e1000_hw *hw) if (ret_val) return ret_val; eeprom_size = - (eeprom_size & EEPROM_SIZE_MASK) >> EEPROM_SIZE_SHIFT; + FIELD_GET(EEPROM_SIZE_MASK, eeprom_size); /* 256B eeprom size was not supported in earlier hardware, so we * bump eeprom_size up one to ensure that "1" (which maps to * 256B) is never the result used in the shifting logic below. @@ -4892,8 +4887,7 @@ static s32 e1000_get_cable_length(struct e1000_hw *hw, u16 *min_length, &phy_data); if (ret_val) return ret_val; - cable_length = (phy_data & M88E1000_PSSR_CABLE_LENGTH) >> - M88E1000_PSSR_CABLE_LENGTH_SHIFT; + cable_length = FIELD_GET(M88E1000_PSSR_CABLE_LENGTH, phy_data); /* Convert the enum value to ranged values */ switch (cable_length) { @@ -5002,8 +4996,7 @@ static s32 e1000_check_polarity(struct e1000_hw *hw, &phy_data); if (ret_val) return ret_val; - *polarity = ((phy_data & M88E1000_PSSR_REV_POLARITY) >> - M88E1000_PSSR_REV_POLARITY_SHIFT) ? + *polarity = FIELD_GET(M88E1000_PSSR_REV_POLARITY, phy_data) ? 
e1000_rev_polarity_reversed : e1000_rev_polarity_normal; } else if (hw->phy_type == e1000_phy_igp) { @@ -5073,8 +5066,8 @@ static s32 e1000_check_downshift(struct e1000_hw *hw) if (ret_val) return ret_val; - hw->speed_downgraded = (phy_data & M88E1000_PSSR_DOWNSHIFT) >> - M88E1000_PSSR_DOWNSHIFT_SHIFT; + hw->speed_downgraded = FIELD_GET(M88E1000_PSSR_DOWNSHIFT, + phy_data); } return E1000_SUCCESS; diff --git a/drivers/net/ethernet/intel/e1000e/80003es2lan.c b/drivers/net/ethernet/intel/e1000e/80003es2lan.c index 31fce3e4e8af..4eb1ceaf865a 100644 --- a/drivers/net/ethernet/intel/e1000e/80003es2lan.c +++ b/drivers/net/ethernet/intel/e1000e/80003es2lan.c @@ -92,8 +92,7 @@ static s32 e1000_init_nvm_params_80003es2lan(struct e1000_hw *hw) nvm->type = e1000_nvm_eeprom_spi; - size = (u16)((eecd & E1000_EECD_SIZE_EX_MASK) >> - E1000_EECD_SIZE_EX_SHIFT); + size = (u16)FIELD_GET(E1000_EECD_SIZE_EX_MASK, eecd); /* Added to a constant, "size" becomes the left-shift value * for setting word_size. diff --git a/drivers/net/ethernet/intel/e1000e/82571.c b/drivers/net/ethernet/intel/e1000e/82571.c index 0b1e890dd583..969f855a79ee 100644 --- a/drivers/net/ethernet/intel/e1000e/82571.c +++ b/drivers/net/ethernet/intel/e1000e/82571.c @@ -157,8 +157,7 @@ static s32 e1000_init_nvm_params_82571(struct e1000_hw *hw) fallthrough; default: nvm->type = e1000_nvm_eeprom_spi; - size = (u16)((eecd & E1000_EECD_SIZE_EX_MASK) >> - E1000_EECD_SIZE_EX_SHIFT); + size = (u16)FIELD_GET(E1000_EECD_SIZE_EX_MASK, eecd); /* Added to a constant, "size" becomes the left-shift value * for setting word_size. */ diff --git a/drivers/net/ethernet/intel/e1000e/ethtool.c b/drivers/net/ethernet/intel/e1000e/ethtool.c index 9835e6a90d56..fc0f98ea6133 100644 --- a/drivers/net/ethernet/intel/e1000e/ethtool.c +++ b/drivers/net/ethernet/intel/e1000e/ethtool.c @@ -654,8 +654,8 @@ static void e1000_get_drvinfo(struct net_device *netdev, */ snprintf(drvinfo->fw_version, sizeof(drvinfo->fw_version), "%d.%d-%d", - (adapter->eeprom_vers & 0xF000) >> 12, - (adapter->eeprom_vers & 0x0FF0) >> 4, + FIELD_GET(0xF000, adapter->eeprom_vers), + FIELD_GET(0x0FF0, adapter->eeprom_vers), (adapter->eeprom_vers & 0x000F)); strscpy(drvinfo->bus_info, pci_name(adapter->pdev), @@ -925,8 +925,7 @@ static int e1000_reg_test(struct e1000_adapter *adapter, u64 *data) } if (mac->type >= e1000_pch_lpt) - wlock_mac = (er32(FWSM) & E1000_FWSM_WLOCK_MAC_MASK) >> - E1000_FWSM_WLOCK_MAC_SHIFT; + wlock_mac = FIELD_GET(E1000_FWSM_WLOCK_MAC_MASK, er32(FWSM)); for (i = 0; i < mac->rar_entry_count; i++) { if (mac->type >= e1000_pch_lpt) { diff --git a/drivers/net/ethernet/intel/e1000e/ich8lan.c b/drivers/net/ethernet/intel/e1000e/ich8lan.c index 39e9fc601bf5..a2788fd5f8bb 100644 --- a/drivers/net/ethernet/intel/e1000e/ich8lan.c +++ b/drivers/net/ethernet/intel/e1000e/ich8lan.c @@ -1072,13 +1072,11 @@ static s32 e1000_platform_pm_pch_lpt(struct e1000_hw *hw, bool link) lat_enc_d = (lat_enc & E1000_LTRV_VALUE_MASK) * (1U << (E1000_LTRV_SCALE_FACTOR * - ((lat_enc & E1000_LTRV_SCALE_MASK) - >> E1000_LTRV_SCALE_SHIFT))); + FIELD_GET(E1000_LTRV_SCALE_MASK, lat_enc))); max_ltr_enc_d = (max_ltr_enc & E1000_LTRV_VALUE_MASK) * - (1U << (E1000_LTRV_SCALE_FACTOR * - ((max_ltr_enc & E1000_LTRV_SCALE_MASK) - >> E1000_LTRV_SCALE_SHIFT))); + (1U << (E1000_LTRV_SCALE_FACTOR * + FIELD_GET(E1000_LTRV_SCALE_MASK, max_ltr_enc))); if (lat_enc_d > max_ltr_enc_d) lat_enc = max_ltr_enc; @@ -2075,8 +2073,7 @@ static s32 e1000_write_smbus_addr(struct e1000_hw *hw) { u16 phy_data; u32 strap = er32(STRAP); - 
u32 freq = (strap & E1000_STRAP_SMT_FREQ_MASK) >> - E1000_STRAP_SMT_FREQ_SHIFT; + u32 freq = FIELD_GET(E1000_STRAP_SMT_FREQ_MASK, strap); s32 ret_val; strap &= E1000_STRAP_SMBUS_ADDRESS_MASK; @@ -2562,8 +2559,7 @@ void e1000_copy_rx_addrs_to_phy_ich8lan(struct e1000_hw *hw) hw->phy.ops.write_reg_page(hw, BM_RAR_H(i), (u16)(mac_reg & 0xFFFF)); hw->phy.ops.write_reg_page(hw, BM_RAR_CTRL(i), - (u16)((mac_reg & E1000_RAH_AV) - >> 16)); + FIELD_GET(E1000_RAH_AV, mac_reg)); } e1000_disable_phy_wakeup_reg_access_bm(hw, &phy_reg); @@ -3205,7 +3201,7 @@ static s32 e1000_valid_nvm_bank_detect_ich8lan(struct e1000_hw *hw, u32 *bank) &nvm_dword); if (ret_val) return ret_val; - sig_byte = (u8)((nvm_dword & 0xFF00) >> 8); + sig_byte = FIELD_GET(0xFF00, nvm_dword); if ((sig_byte & E1000_ICH_NVM_VALID_SIG_MASK) == E1000_ICH_NVM_SIG_VALUE) { *bank = 0; @@ -3218,7 +3214,7 @@ static s32 e1000_valid_nvm_bank_detect_ich8lan(struct e1000_hw *hw, u32 *bank) &nvm_dword); if (ret_val) return ret_val; - sig_byte = (u8)((nvm_dword & 0xFF00) >> 8); + sig_byte = FIELD_GET(0xFF00, nvm_dword); if ((sig_byte & E1000_ICH_NVM_VALID_SIG_MASK) == E1000_ICH_NVM_SIG_VALUE) { *bank = 1; diff --git a/drivers/net/ethernet/intel/e1000e/mac.c b/drivers/net/ethernet/intel/e1000e/mac.c index 5df7ad93f3d7..5abf063236a8 100644 --- a/drivers/net/ethernet/intel/e1000e/mac.c +++ b/drivers/net/ethernet/intel/e1000e/mac.c @@ -25,9 +25,9 @@ s32 e1000e_get_bus_info_pcie(struct e1000_hw *hw) pci_read_config_word(adapter->pdev, cap_offset + PCIE_LINK_STATUS, &pcie_link_status); - bus->width = (enum e1000_bus_width)((pcie_link_status & - PCIE_LINK_WIDTH_MASK) >> - PCIE_LINK_WIDTH_SHIFT); + bus->width = + (enum e1000_bus_width)FIELD_GET(PCIE_LINK_WIDTH_MASK, + pcie_link_status); } mac->ops.set_lan_id(hw); @@ -52,7 +52,7 @@ void e1000_set_lan_id_multi_port_pcie(struct e1000_hw *hw) * for the device regardless of function swap state. 
*/ reg = er32(STATUS); - bus->func = (reg & E1000_STATUS_FUNC_MASK) >> E1000_STATUS_FUNC_SHIFT; + bus->func = FIELD_GET(E1000_STATUS_FUNC_MASK, reg); } /** diff --git a/drivers/net/ethernet/intel/e1000e/netdev.c b/drivers/net/ethernet/intel/e1000e/netdev.c index f536c856727c..af5d9d97a0d6 100644 --- a/drivers/net/ethernet/intel/e1000e/netdev.c +++ b/drivers/net/ethernet/intel/e1000e/netdev.c @@ -1788,8 +1788,7 @@ static irqreturn_t e1000_intr_msi(int __always_unused irq, void *data) adapter->corr_errors += pbeccsts & E1000_PBECCSTS_CORR_ERR_CNT_MASK; adapter->uncorr_errors += - (pbeccsts & E1000_PBECCSTS_UNCORR_ERR_CNT_MASK) >> - E1000_PBECCSTS_UNCORR_ERR_CNT_SHIFT; + FIELD_GET(E1000_PBECCSTS_UNCORR_ERR_CNT_MASK, pbeccsts); /* Do the reset outside of interrupt context */ schedule_work(&adapter->reset_task); @@ -1868,8 +1867,7 @@ static irqreturn_t e1000_intr(int __always_unused irq, void *data) adapter->corr_errors += pbeccsts & E1000_PBECCSTS_CORR_ERR_CNT_MASK; adapter->uncorr_errors += - (pbeccsts & E1000_PBECCSTS_UNCORR_ERR_CNT_MASK) >> - E1000_PBECCSTS_UNCORR_ERR_CNT_SHIFT; + FIELD_GET(E1000_PBECCSTS_UNCORR_ERR_CNT_MASK, pbeccsts); /* Do the reset outside of interrupt context */ schedule_work(&adapter->reset_task); @@ -5031,8 +5029,7 @@ static void e1000e_update_stats(struct e1000_adapter *adapter) adapter->corr_errors += pbeccsts & E1000_PBECCSTS_CORR_ERR_CNT_MASK; adapter->uncorr_errors += - (pbeccsts & E1000_PBECCSTS_UNCORR_ERR_CNT_MASK) >> - E1000_PBECCSTS_UNCORR_ERR_CNT_SHIFT; + FIELD_GET(E1000_PBECCSTS_UNCORR_ERR_CNT_MASK, pbeccsts); } } @@ -6249,7 +6246,7 @@ static int e1000_init_phy_wakeup(struct e1000_adapter *adapter, u32 wufc) phy_reg |= BM_RCTL_MPE; phy_reg &= ~(BM_RCTL_MO_MASK); if (mac_reg & E1000_RCTL_MO_3) - phy_reg |= (((mac_reg & E1000_RCTL_MO_3) >> E1000_RCTL_MO_SHIFT) + phy_reg |= (FIELD_GET(E1000_RCTL_MO_3, mac_reg) << BM_RCTL_MO_SHIFT); if (mac_reg & E1000_RCTL_BAM) phy_reg |= BM_RCTL_BAM; diff --git a/drivers/net/ethernet/intel/e1000e/phy.c b/drivers/net/ethernet/intel/e1000e/phy.c index 2498f021eb02..5e329156d1ba 100644 --- a/drivers/net/ethernet/intel/e1000e/phy.c +++ b/drivers/net/ethernet/intel/e1000e/phy.c @@ -154,10 +154,9 @@ s32 e1000e_read_phy_reg_mdic(struct e1000_hw *hw, u32 offset, u16 *data) e_dbg("MDI Read PHY Reg Address %d Error\n", offset); return -E1000_ERR_PHY; } - if (((mdic & E1000_MDIC_REG_MASK) >> E1000_MDIC_REG_SHIFT) != offset) { + if (FIELD_GET(E1000_MDIC_REG_MASK, mdic) != offset) { e_dbg("MDI Read offset error - requested %d, returned %d\n", - offset, - (mdic & E1000_MDIC_REG_MASK) >> E1000_MDIC_REG_SHIFT); + offset, FIELD_GET(E1000_MDIC_REG_MASK, mdic)); return -E1000_ERR_PHY; } *data = (u16)mdic; @@ -167,7 +166,6 @@ s32 e1000e_read_phy_reg_mdic(struct e1000_hw *hw, u32 offset, u16 *data) */ if (hw->mac.type == e1000_pch2lan) udelay(100); - return 0; } @@ -218,10 +216,9 @@ s32 e1000e_write_phy_reg_mdic(struct e1000_hw *hw, u32 offset, u16 data) e_dbg("MDI Write PHY Red Address %d Error\n", offset); return -E1000_ERR_PHY; } - if (((mdic & E1000_MDIC_REG_MASK) >> E1000_MDIC_REG_SHIFT) != offset) { + if (FIELD_GET(E1000_MDIC_REG_MASK, mdic) != offset) { e_dbg("MDI Write offset error - requested %d, returned %d\n", - offset, - (mdic & E1000_MDIC_REG_MASK) >> E1000_MDIC_REG_SHIFT); + offset, FIELD_GET(E1000_MDIC_REG_MASK, mdic)); return -E1000_ERR_PHY; } @@ -1792,8 +1789,7 @@ s32 e1000e_get_cable_length_m88(struct e1000_hw *hw) if (ret_val) return ret_val; - index = ((phy_data & M88E1000_PSSR_CABLE_LENGTH) >> - 
M88E1000_PSSR_CABLE_LENGTH_SHIFT); + index = FIELD_GET(M88E1000_PSSR_CABLE_LENGTH, phy_data); if (index >= M88E1000_CABLE_LENGTH_TABLE_SIZE - 1) return -E1000_ERR_PHY; @@ -3233,8 +3229,7 @@ s32 e1000_get_cable_length_82577(struct e1000_hw *hw) if (ret_val) return ret_val; - length = ((phy_data & I82577_DSTATUS_CABLE_LENGTH) >> - I82577_DSTATUS_CABLE_LENGTH_SHIFT); + length = FIELD_GET(I82577_DSTATUS_CABLE_LENGTH, phy_data); if (length == E1000_CABLE_LENGTH_UNDEFINED) return -E1000_ERR_PHY; diff --git a/drivers/net/ethernet/intel/fm10k/fm10k_pf.c b/drivers/net/ethernet/intel/fm10k/fm10k_pf.c index 1eea0ec5dbcf..98861cc6df7c 100644 --- a/drivers/net/ethernet/intel/fm10k/fm10k_pf.c +++ b/drivers/net/ethernet/intel/fm10k/fm10k_pf.c @@ -1575,8 +1575,7 @@ static s32 fm10k_get_fault_pf(struct fm10k_hw *hw, int type, if (func & FM10K_FAULT_FUNC_PF) fault->func = 0; else - fault->func = 1 + ((func & FM10K_FAULT_FUNC_VF_MASK) >> - FM10K_FAULT_FUNC_VF_SHIFT); + fault->func = 1 + FIELD_GET(FM10K_FAULT_FUNC_VF_MASK, func); /* record fault type */ fault->type = func & FM10K_FAULT_FUNC_TYPE_MASK; diff --git a/drivers/net/ethernet/intel/fm10k/fm10k_vf.c b/drivers/net/ethernet/intel/fm10k/fm10k_vf.c index c50928ec14ff..7fb1961f2921 100644 --- a/drivers/net/ethernet/intel/fm10k/fm10k_vf.c +++ b/drivers/net/ethernet/intel/fm10k/fm10k_vf.c @@ -127,15 +127,14 @@ static s32 fm10k_init_hw_vf(struct fm10k_hw *hw) hw->mac.max_queues = i; /* fetch default VLAN and ITR scale */ - hw->mac.default_vid = (fm10k_read_reg(hw, FM10K_TXQCTL(0)) & - FM10K_TXQCTL_VID_MASK) >> FM10K_TXQCTL_VID_SHIFT; + hw->mac.default_vid = FIELD_GET(FM10K_TXQCTL_VID_MASK, + fm10k_read_reg(hw, FM10K_TXQCTL(0))); /* Read the ITR scale from TDLEN. See the definition of * FM10K_TDLEN_ITR_SCALE_SHIFT for more information about how TDLEN is * used here. */ - hw->mac.itr_scale = (fm10k_read_reg(hw, FM10K_TDLEN(0)) & - FM10K_TDLEN_ITR_SCALE_MASK) >> - FM10K_TDLEN_ITR_SCALE_SHIFT; + hw->mac.itr_scale = FIELD_GET(FM10K_TDLEN_ITR_SCALE_MASK, + fm10k_read_reg(hw, FM10K_TDLEN(0))); return 0; diff --git a/drivers/net/ethernet/intel/igb/e1000_82575.c b/drivers/net/ethernet/intel/igb/e1000_82575.c index 8d6e44ee1895..64dfc362d1dc 100644 --- a/drivers/net/ethernet/intel/igb/e1000_82575.c +++ b/drivers/net/ethernet/intel/igb/e1000_82575.c @@ -222,8 +222,7 @@ static s32 igb_init_phy_params_82575(struct e1000_hw *hw) } /* set lan id */ - hw->bus.func = (rd32(E1000_STATUS) & E1000_STATUS_FUNC_MASK) >> - E1000_STATUS_FUNC_SHIFT; + hw->bus.func = FIELD_GET(E1000_STATUS_FUNC_MASK, rd32(E1000_STATUS)); /* Set phy->phy_addr and phy->id. */ ret_val = igb_get_phy_id_82575(hw); @@ -262,8 +261,8 @@ static s32 igb_init_phy_params_82575(struct e1000_hw *hw) if (ret_val) goto out; - data = (data & E1000_M88E1112_MAC_CTRL_1_MODE_MASK) >> - E1000_M88E1112_MAC_CTRL_1_MODE_SHIFT; + data = FIELD_GET(E1000_M88E1112_MAC_CTRL_1_MODE_MASK, + data); if (data == E1000_M88E1112_AUTO_COPPER_SGMII || data == E1000_M88E1112_AUTO_COPPER_BASEX) hw->mac.ops.check_for_link = @@ -330,8 +329,7 @@ static s32 igb_init_nvm_params_82575(struct e1000_hw *hw) u32 eecd = rd32(E1000_EECD); u16 size; - size = (u16)((eecd & E1000_EECD_SIZE_EX_MASK) >> - E1000_EECD_SIZE_EX_SHIFT); + size = FIELD_GET(E1000_EECD_SIZE_EX_MASK, eecd); /* Added to a constant, "size" becomes the left-shift value * for setting word_size. 
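/* Illustrative aside (not part of the patch): the conversions in this
 * series replace open-coded "(reg & MASK) >> SHIFT" with FIELD_GET(MASK, reg)
 * from <linux/bitfield.h>, which derives the shift from the mask at compile
 * time; FIELD_PREP(MASK, val) is the complementary write-side helper.
 * Minimal sketch below; DEMO_SIZE_MASK is an assumed mask, not a real
 * register definition.
 */
#include <linux/bitfield.h>
#include <linux/bits.h>
#include <linux/types.h>

#define DEMO_SIZE_MASK	GENMASK(14, 11)		/* hypothetical 4-bit field */

static u16 demo_get_size(u32 eecd)
{
	/* same result as "(eecd & DEMO_SIZE_MASK) >> 11", without needing a
	 * separate *_SHIFT define
	 */
	return FIELD_GET(DEMO_SIZE_MASK, eecd);
}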
@@ -2798,7 +2796,7 @@ static s32 igb_get_thermal_sensor_data_generic(struct e1000_hw *hw) return 0; hw->nvm.ops.read(hw, ets_offset, 1, &ets_cfg); - if (((ets_cfg & NVM_ETS_TYPE_MASK) >> NVM_ETS_TYPE_SHIFT) + if (FIELD_GET(NVM_ETS_TYPE_MASK, ets_cfg) != NVM_ETS_TYPE_EMC) return E1000_NOT_IMPLEMENTED; @@ -2808,10 +2806,8 @@ static s32 igb_get_thermal_sensor_data_generic(struct e1000_hw *hw) for (i = 1; i < num_sensors; i++) { hw->nvm.ops.read(hw, (ets_offset + i), 1, &ets_sensor); - sensor_index = ((ets_sensor & NVM_ETS_DATA_INDEX_MASK) >> - NVM_ETS_DATA_INDEX_SHIFT); - sensor_location = ((ets_sensor & NVM_ETS_DATA_LOC_MASK) >> - NVM_ETS_DATA_LOC_SHIFT); + sensor_index = FIELD_GET(NVM_ETS_DATA_INDEX_MASK, ets_sensor); + sensor_location = FIELD_GET(NVM_ETS_DATA_LOC_MASK, ets_sensor); if (sensor_location != 0) hw->phy.ops.read_i2c_byte(hw, @@ -2859,20 +2855,17 @@ static s32 igb_init_thermal_sensor_thresh_generic(struct e1000_hw *hw) return 0; hw->nvm.ops.read(hw, ets_offset, 1, &ets_cfg); - if (((ets_cfg & NVM_ETS_TYPE_MASK) >> NVM_ETS_TYPE_SHIFT) + if (FIELD_GET(NVM_ETS_TYPE_MASK, ets_cfg) != NVM_ETS_TYPE_EMC) return E1000_NOT_IMPLEMENTED; - low_thresh_delta = ((ets_cfg & NVM_ETS_LTHRES_DELTA_MASK) >> - NVM_ETS_LTHRES_DELTA_SHIFT); + low_thresh_delta = FIELD_GET(NVM_ETS_LTHRES_DELTA_MASK, ets_cfg); num_sensors = (ets_cfg & NVM_ETS_NUM_SENSORS_MASK); for (i = 1; i <= num_sensors; i++) { hw->nvm.ops.read(hw, (ets_offset + i), 1, &ets_sensor); - sensor_index = ((ets_sensor & NVM_ETS_DATA_INDEX_MASK) >> - NVM_ETS_DATA_INDEX_SHIFT); - sensor_location = ((ets_sensor & NVM_ETS_DATA_LOC_MASK) >> - NVM_ETS_DATA_LOC_SHIFT); + sensor_index = FIELD_GET(NVM_ETS_DATA_INDEX_MASK, ets_sensor); + sensor_location = FIELD_GET(NVM_ETS_DATA_LOC_MASK, ets_sensor); therm_limit = ets_sensor & NVM_ETS_DATA_HTHRESH_MASK; hw->phy.ops.write_i2c_byte(hw, diff --git a/drivers/net/ethernet/intel/igb/e1000_i210.c b/drivers/net/ethernet/intel/igb/e1000_i210.c index 53b396fd194a..503b239868e8 100644 --- a/drivers/net/ethernet/intel/igb/e1000_i210.c +++ b/drivers/net/ethernet/intel/igb/e1000_i210.c @@ -473,7 +473,7 @@ s32 igb_read_invm_version(struct e1000_hw *hw, /* Check if we have second version location used */ else if ((i == 1) && ((*record & E1000_INVM_VER_FIELD_TWO) == 0)) { - version = (*record & E1000_INVM_VER_FIELD_ONE) >> 3; + version = FIELD_GET(E1000_INVM_VER_FIELD_ONE, *record); status = 0; break; } @@ -483,8 +483,8 @@ s32 igb_read_invm_version(struct e1000_hw *hw, else if ((((*record & E1000_INVM_VER_FIELD_ONE) == 0) && ((*record & 0x3) == 0)) || (((*record & 0x3) != 0) && (i != 1))) { - version = (*next_record & E1000_INVM_VER_FIELD_TWO) - >> 13; + version = FIELD_GET(E1000_INVM_VER_FIELD_TWO, + *next_record); status = 0; break; } @@ -493,15 +493,15 @@ s32 igb_read_invm_version(struct e1000_hw *hw, */ else if (((*record & E1000_INVM_VER_FIELD_TWO) == 0) && ((*record & 0x3) == 0)) { - version = (*record & E1000_INVM_VER_FIELD_ONE) >> 3; + version = FIELD_GET(E1000_INVM_VER_FIELD_ONE, *record); status = 0; break; } } if (!status) { - invm_ver->invm_major = (version & E1000_INVM_MAJOR_MASK) - >> E1000_INVM_MAJOR_SHIFT; + invm_ver->invm_major = FIELD_GET(E1000_INVM_MAJOR_MASK, + version); invm_ver->invm_minor = version & E1000_INVM_MINOR_MASK; } /* Read Image Type */ @@ -520,7 +520,8 @@ s32 igb_read_invm_version(struct e1000_hw *hw, ((*record & E1000_INVM_IMGTYPE_FIELD) == 0)) || ((((*record & 0x3) != 0) && (i != 1)))) { invm_ver->invm_img_type = - (*next_record & E1000_INVM_IMGTYPE_FIELD) >> 23; + 
FIELD_GET(E1000_INVM_IMGTYPE_FIELD, + *next_record); status = 0; break; } diff --git a/drivers/net/ethernet/intel/igb/e1000_mac.c b/drivers/net/ethernet/intel/igb/e1000_mac.c index caf91c6f52b4..00131636c4cf 100644 --- a/drivers/net/ethernet/intel/igb/e1000_mac.c +++ b/drivers/net/ethernet/intel/igb/e1000_mac.c @@ -50,13 +50,12 @@ s32 igb_get_bus_info_pcie(struct e1000_hw *hw) break; } - bus->width = (enum e1000_bus_width)((pcie_link_status & - PCI_EXP_LNKSTA_NLW) >> - PCI_EXP_LNKSTA_NLW_SHIFT); + bus->width = (enum e1000_bus_width)FIELD_GET(PCI_EXP_LNKSTA_NLW, + pcie_link_status); } reg = rd32(E1000_STATUS); - bus->func = (reg & E1000_STATUS_FUNC_MASK) >> E1000_STATUS_FUNC_SHIFT; + bus->func = FIELD_GET(E1000_STATUS_FUNC_MASK, reg); return 0; } diff --git a/drivers/net/ethernet/intel/igb/e1000_nvm.c b/drivers/net/ethernet/intel/igb/e1000_nvm.c index 0da57e89593a..2dcd64d6dec3 100644 --- a/drivers/net/ethernet/intel/igb/e1000_nvm.c +++ b/drivers/net/ethernet/intel/igb/e1000_nvm.c @@ -708,10 +708,10 @@ void igb_get_fw_version(struct e1000_hw *hw, struct e1000_fw_version *fw_vers) */ if ((etrack_test & NVM_MAJOR_MASK) != NVM_ETRACK_VALID) { hw->nvm.ops.read(hw, NVM_VERSION, 1, &fw_version); - fw_vers->eep_major = (fw_version & NVM_MAJOR_MASK) - >> NVM_MAJOR_SHIFT; - fw_vers->eep_minor = (fw_version & NVM_MINOR_MASK) - >> NVM_MINOR_SHIFT; + fw_vers->eep_major = FIELD_GET(NVM_MAJOR_MASK, + fw_version); + fw_vers->eep_minor = FIELD_GET(NVM_MINOR_MASK, + fw_version); fw_vers->eep_build = (fw_version & NVM_IMAGE_ID_MASK); goto etrack_id; } @@ -753,15 +753,13 @@ void igb_get_fw_version(struct e1000_hw *hw, struct e1000_fw_version *fw_vers) return; } hw->nvm.ops.read(hw, NVM_VERSION, 1, &fw_version); - fw_vers->eep_major = (fw_version & NVM_MAJOR_MASK) - >> NVM_MAJOR_SHIFT; + fw_vers->eep_major = FIELD_GET(NVM_MAJOR_MASK, fw_version); /* check for old style version format in newer images*/ if ((fw_version & NVM_NEW_DEC_MASK) == 0x0) { eeprom_verl = (fw_version & NVM_COMB_VER_MASK); } else { - eeprom_verl = (fw_version & NVM_MINOR_MASK) - >> NVM_MINOR_SHIFT; + eeprom_verl = FIELD_GET(NVM_MINOR_MASK, fw_version); } /* Convert minor value to hex before assigning to output struct * Val to be converted will not be higher than 99, per tool output diff --git a/drivers/net/ethernet/intel/igb/e1000_phy.c b/drivers/net/ethernet/intel/igb/e1000_phy.c index c84e7356cdb1..cd65008c7ef5 100644 --- a/drivers/net/ethernet/intel/igb/e1000_phy.c +++ b/drivers/net/ethernet/intel/igb/e1000_phy.c @@ -1682,8 +1682,7 @@ s32 igb_get_cable_length_m88(struct e1000_hw *hw) if (ret_val) goto out; - index = (phy_data & M88E1000_PSSR_CABLE_LENGTH) >> - M88E1000_PSSR_CABLE_LENGTH_SHIFT; + index = FIELD_GET(M88E1000_PSSR_CABLE_LENGTH, phy_data); if (index >= ARRAY_SIZE(e1000_m88_cable_length_table) - 1) { ret_val = -E1000_ERR_PHY; goto out; @@ -1796,8 +1795,7 @@ s32 igb_get_cable_length_m88_gen2(struct e1000_hw *hw) if (ret_val) goto out; - index = (phy_data & M88E1000_PSSR_CABLE_LENGTH) >> - M88E1000_PSSR_CABLE_LENGTH_SHIFT; + index = FIELD_GET(M88E1000_PSSR_CABLE_LENGTH, phy_data); if (index >= ARRAY_SIZE(e1000_m88_cable_length_table) - 1) { ret_val = -E1000_ERR_PHY; goto out; @@ -2578,8 +2576,7 @@ s32 igb_get_cable_length_82580(struct e1000_hw *hw) if (ret_val) goto out; - length = (phy_data & I82580_DSTATUS_CABLE_LENGTH) >> - I82580_DSTATUS_CABLE_LENGTH_SHIFT; + length = FIELD_GET(I82580_DSTATUS_CABLE_LENGTH, phy_data); if (length == E1000_CABLE_LENGTH_UNDEFINED) ret_val = -E1000_ERR_PHY; diff --git 
a/drivers/net/ethernet/intel/igb/igb_ethtool.c b/drivers/net/ethernet/intel/igb/igb_ethtool.c index f03977f2323e..3ba23fb87551 100644 --- a/drivers/net/ethernet/intel/igb/igb_ethtool.c +++ b/drivers/net/ethernet/intel/igb/igb_ethtool.c @@ -2434,7 +2434,7 @@ static int igb_get_ts_info(struct net_device *dev, } } -#define ETHER_TYPE_FULL_MASK ((__force __be16)~0) +#define ETHER_TYPE_FULL_MASK cpu_to_be16(FIELD_MAX(U16_MAX)) static int igb_get_ethtool_nfc_entry(struct igb_adapter *adapter, struct ethtool_rxnfc *cmd) { @@ -2732,8 +2732,8 @@ static int igb_rxnfc_write_vlan_prio_filter(struct igb_adapter *adapter, u32 vlapqf; vlapqf = rd32(E1000_VLAPQF); - vlan_priority = (ntohs(input->filter.vlan_tci) & VLAN_PRIO_MASK) - >> VLAN_PRIO_SHIFT; + vlan_priority = FIELD_GET(VLAN_PRIO_MASK, + ntohs(input->filter.vlan_tci)); queue_index = (vlapqf >> (vlan_priority * 4)) & E1000_VLAPQF_QUEUE_MASK; /* check whether this vlan prio is already set */ @@ -2816,7 +2816,7 @@ static void igb_clear_vlan_prio_filter(struct igb_adapter *adapter, u8 vlan_priority; u32 vlapqf; - vlan_priority = (vlan_tci & VLAN_PRIO_MASK) >> VLAN_PRIO_SHIFT; + vlan_priority = FIELD_GET(VLAN_PRIO_MASK, vlan_tci); vlapqf = rd32(E1000_VLAPQF); vlapqf &= ~E1000_VLAPQF_P_VALID(vlan_priority); diff --git a/drivers/net/ethernet/intel/igb/igb_main.c b/drivers/net/ethernet/intel/igb/igb_main.c index 897eb36bb609..4df8d4153aa5 100644 --- a/drivers/net/ethernet/intel/igb/igb_main.c +++ b/drivers/net/ethernet/intel/igb/igb_main.c @@ -7295,7 +7295,7 @@ static int igb_set_vf_promisc(struct igb_adapter *adapter, u32 *msgbuf, u32 vf) static int igb_set_vf_multicasts(struct igb_adapter *adapter, u32 *msgbuf, u32 vf) { - int n = (msgbuf[0] & E1000_VT_MSGINFO_MASK) >> E1000_VT_MSGINFO_SHIFT; + int n = FIELD_GET(E1000_VT_MSGINFO_MASK, msgbuf[0]); u16 *hash_list = (u16 *)&msgbuf[1]; struct vf_data_storage *vf_data = &adapter->vf_data[vf]; int i; @@ -7555,7 +7555,7 @@ static int igb_ndo_set_vf_vlan(struct net_device *netdev, int vf, static int igb_set_vf_vlan_msg(struct igb_adapter *adapter, u32 *msgbuf, u32 vf) { - int add = (msgbuf[0] & E1000_VT_MSGINFO_MASK) >> E1000_VT_MSGINFO_SHIFT; + int add = FIELD_GET(E1000_VT_MSGINFO_MASK, msgbuf[0]); int vid = (msgbuf[1] & E1000_VLVF_VLANID_MASK); int ret; diff --git a/drivers/net/ethernet/intel/igbvf/mbx.c b/drivers/net/ethernet/intel/igbvf/mbx.c index a3cd7ac48d4b..d15282ee5ea8 100644 --- a/drivers/net/ethernet/intel/igbvf/mbx.c +++ b/drivers/net/ethernet/intel/igbvf/mbx.c @@ -1,6 +1,7 @@ // SPDX-License-Identifier: GPL-2.0 /* Copyright(c) 2009 - 2018 Intel Corporation. */ +#include #include "mbx.h" /** diff --git a/drivers/net/ethernet/intel/igbvf/netdev.c b/drivers/net/ethernet/intel/igbvf/netdev.c index e6c1fbee049e..a4d4f00e6a87 100644 --- a/drivers/net/ethernet/intel/igbvf/netdev.c +++ b/drivers/net/ethernet/intel/igbvf/netdev.c @@ -273,9 +273,8 @@ static bool igbvf_clean_rx_irq(struct igbvf_adapter *adapter, * that case, it fills the header buffer and spills the rest * into the page. 
*/ - hlen = (le16_to_cpu(rx_desc->wb.lower.lo_dword.hs_rss.hdr_info) - & E1000_RXDADV_HDRBUFLEN_MASK) >> - E1000_RXDADV_HDRBUFLEN_SHIFT; + hlen = le16_get_bits(rx_desc->wb.lower.lo_dword.hs_rss.hdr_info, + E1000_RXDADV_HDRBUFLEN_MASK); if (hlen > adapter->rx_ps_hdr_size) hlen = adapter->rx_ps_hdr_size; diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_common.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_common.c index 878dd8dff528..7d7bd44448c4 100644 --- a/drivers/net/ethernet/intel/ixgbe/ixgbe_common.c +++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_common.c @@ -684,7 +684,7 @@ void ixgbe_set_lan_id_multi_port_pcie(struct ixgbe_hw *hw) u32 reg; reg = IXGBE_READ_REG(hw, IXGBE_STATUS); - bus->func = (reg & IXGBE_STATUS_LAN_ID) >> IXGBE_STATUS_LAN_ID_SHIFT; + bus->func = FIELD_GET(IXGBE_STATUS_LAN_ID, reg); bus->lan_id = bus->func; /* check for a port swap */ @@ -695,8 +695,8 @@ void ixgbe_set_lan_id_multi_port_pcie(struct ixgbe_hw *hw) /* Get MAC instance from EEPROM for configuring CS4227 */ if (hw->device_id == IXGBE_DEV_ID_X550EM_A_SFP) { hw->eeprom.ops.read(hw, IXGBE_EEPROM_CTRL_4, &ee_ctrl_4); - bus->instance_id = (ee_ctrl_4 & IXGBE_EE_CTRL_4_INST_ID) >> - IXGBE_EE_CTRL_4_INST_ID_SHIFT; + bus->instance_id = FIELD_GET(IXGBE_EE_CTRL_4_INST_ID, + ee_ctrl_4); } } @@ -870,10 +870,9 @@ s32 ixgbe_init_eeprom_params_generic(struct ixgbe_hw *hw) * SPI EEPROM is assumed here. This code would need to * change if a future EEPROM is not SPI. */ - eeprom_size = (u16)((eec & IXGBE_EEC_SIZE) >> - IXGBE_EEC_SIZE_SHIFT); + eeprom_size = FIELD_GET(IXGBE_EEC_SIZE, eec); eeprom->word_size = BIT(eeprom_size + - IXGBE_EEPROM_WORD_SIZE_SHIFT); + IXGBE_EEPROM_WORD_SIZE_SHIFT); } if (eec & IXGBE_EEC_ADDR_SIZE) @@ -3946,10 +3945,10 @@ s32 ixgbe_get_thermal_sensor_data_generic(struct ixgbe_hw *hw) if (status) return status; - sensor_index = ((ets_sensor & IXGBE_ETS_DATA_INDEX_MASK) >> - IXGBE_ETS_DATA_INDEX_SHIFT); - sensor_location = ((ets_sensor & IXGBE_ETS_DATA_LOC_MASK) >> - IXGBE_ETS_DATA_LOC_SHIFT); + sensor_index = FIELD_GET(IXGBE_ETS_DATA_INDEX_MASK, + ets_sensor); + sensor_location = FIELD_GET(IXGBE_ETS_DATA_LOC_MASK, + ets_sensor); if (sensor_location != 0) { status = hw->phy.ops.read_i2c_byte(hw, @@ -3993,8 +3992,7 @@ s32 ixgbe_init_thermal_sensor_thresh_generic(struct ixgbe_hw *hw) if (status) return status; - low_thresh_delta = ((ets_cfg & IXGBE_ETS_LTHRES_DELTA_MASK) >> - IXGBE_ETS_LTHRES_DELTA_SHIFT); + low_thresh_delta = FIELD_GET(IXGBE_ETS_LTHRES_DELTA_MASK, ets_cfg); num_sensors = (ets_cfg & IXGBE_ETS_NUM_SENSORS_MASK); if (num_sensors > IXGBE_MAX_SENSORS) num_sensors = IXGBE_MAX_SENSORS; @@ -4008,10 +4006,10 @@ s32 ixgbe_init_thermal_sensor_thresh_generic(struct ixgbe_hw *hw) ets_offset + 1 + i); continue; } - sensor_index = ((ets_sensor & IXGBE_ETS_DATA_INDEX_MASK) >> - IXGBE_ETS_DATA_INDEX_SHIFT); - sensor_location = ((ets_sensor & IXGBE_ETS_DATA_LOC_MASK) >> - IXGBE_ETS_DATA_LOC_SHIFT); + sensor_index = FIELD_GET(IXGBE_ETS_DATA_INDEX_MASK, + ets_sensor); + sensor_location = FIELD_GET(IXGBE_ETS_DATA_LOC_MASK, + ets_sensor); therm_limit = ets_sensor & IXGBE_ETS_DATA_HTHRESH_MASK; hw->phy.ops.write_i2c_byte(hw, diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c index 94bde2cad0f4..227415d61efc 100644 --- a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c +++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c @@ -11371,7 +11371,7 @@ static pci_ers_result_t ixgbe_io_error_detected(struct pci_dev *pdev, if ((pf_func & 1) == (pdev->devfn & 1)) { 
unsigned int device_id; - vf = (req_id & 0x7F) >> 1; + vf = FIELD_GET(0x7F, req_id); e_dev_err("VF %d has caused a PCIe error\n", vf); e_dev_err("TLP: dw0: %8.8x\tdw1: %8.8x\tdw2: " "%8.8x\tdw3: %8.8x\n", diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_phy.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_phy.c index 689470c1e8ad..ca31638c6fb8 100644 --- a/drivers/net/ethernet/intel/ixgbe/ixgbe_phy.c +++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_phy.c @@ -276,9 +276,8 @@ s32 ixgbe_identify_phy_generic(struct ixgbe_hw *hw) return 0; if (hw->phy.nw_mng_if_sel) { - phy_addr = (hw->phy.nw_mng_if_sel & - IXGBE_NW_MNG_IF_SEL_MDIO_PHY_ADD) >> - IXGBE_NW_MNG_IF_SEL_MDIO_PHY_ADD_SHIFT; + phy_addr = FIELD_GET(IXGBE_NW_MNG_IF_SEL_MDIO_PHY_ADD, + hw->phy.nw_mng_if_sel); if (ixgbe_probe_phy(hw, phy_addr)) return 0; else @@ -1448,8 +1447,7 @@ s32 ixgbe_reset_phy_nl(struct ixgbe_hw *hw) ret_val = hw->eeprom.ops.read(hw, data_offset, &eword); if (ret_val) goto err_eeprom; - control = (eword & IXGBE_CONTROL_MASK_NL) >> - IXGBE_CONTROL_SHIFT_NL; + control = FIELD_GET(IXGBE_CONTROL_MASK_NL, eword); edata = eword & IXGBE_DATA_MASK_NL; switch (control) { case IXGBE_DELAY_NL: diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_sriov.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_sriov.c index 9cfdfa8a4355..f8c6ca9fea82 100644 --- a/drivers/net/ethernet/intel/ixgbe/ixgbe_sriov.c +++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_sriov.c @@ -363,8 +363,7 @@ int ixgbe_pci_sriov_configure(struct pci_dev *dev, int num_vfs) static int ixgbe_set_vf_multicasts(struct ixgbe_adapter *adapter, u32 *msgbuf, u32 vf) { - int entries = (msgbuf[0] & IXGBE_VT_MSGINFO_MASK) - >> IXGBE_VT_MSGINFO_SHIFT; + int entries = FIELD_GET(IXGBE_VT_MSGINFO_MASK, msgbuf[0]); u16 *hash_list = (u16 *)&msgbuf[1]; struct vf_data_storage *vfinfo = &adapter->vfinfo[vf]; struct ixgbe_hw *hw = &adapter->hw; @@ -969,7 +968,7 @@ static int ixgbe_set_vf_mac_addr(struct ixgbe_adapter *adapter, static int ixgbe_set_vf_vlan_msg(struct ixgbe_adapter *adapter, u32 *msgbuf, u32 vf) { - u32 add = (msgbuf[0] & IXGBE_VT_MSGINFO_MASK) >> IXGBE_VT_MSGINFO_SHIFT; + u32 add = FIELD_GET(IXGBE_VT_MSGINFO_MASK, msgbuf[0]); u32 vid = (msgbuf[1] & IXGBE_VLVF_VLANID_MASK); u8 tcs = adapter->hw_tcs; @@ -992,8 +991,7 @@ static int ixgbe_set_vf_macvlan_msg(struct ixgbe_adapter *adapter, u32 *msgbuf, u32 vf) { u8 *new_mac = ((u8 *)(&msgbuf[1])); - int index = (msgbuf[0] & IXGBE_VT_MSGINFO_MASK) >> - IXGBE_VT_MSGINFO_SHIFT; + int index = FIELD_GET(IXGBE_VT_MSGINFO_MASK, msgbuf[0]); int err; if (adapter->vfinfo[vf].pf_set_mac && !adapter->vfinfo[vf].trusted && diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_x540.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_x540.c index d5cfb51ff648..e127070a59f4 100644 --- a/drivers/net/ethernet/intel/ixgbe/ixgbe_x540.c +++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_x540.c @@ -187,16 +187,16 @@ s32 ixgbe_start_hw_X540(struct ixgbe_hw *hw) s32 ixgbe_init_eeprom_params_X540(struct ixgbe_hw *hw) { struct ixgbe_eeprom_info *eeprom = &hw->eeprom; - u32 eec; - u16 eeprom_size; if (eeprom->type == ixgbe_eeprom_uninitialized) { + u16 eeprom_size; + u32 eec; + eeprom->semaphore_delay = 10; eeprom->type = ixgbe_flash; eec = IXGBE_READ_REG(hw, IXGBE_EEC(hw)); - eeprom_size = (u16)((eec & IXGBE_EEC_SIZE) >> - IXGBE_EEC_SIZE_SHIFT); + eeprom_size = FIELD_GET(IXGBE_EEC_SIZE, eec); eeprom->word_size = BIT(eeprom_size + IXGBE_EEPROM_WORD_SIZE_SHIFT); diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_x550.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_x550.c index 
aa4bf6c9a2f7..b3509b617a4e 100644 --- a/drivers/net/ethernet/intel/ixgbe/ixgbe_x550.c +++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_x550.c @@ -628,16 +628,16 @@ static s32 ixgbe_fc_autoneg_fw(struct ixgbe_hw *hw) static s32 ixgbe_init_eeprom_params_X550(struct ixgbe_hw *hw) { struct ixgbe_eeprom_info *eeprom = &hw->eeprom; - u32 eec; - u16 eeprom_size; if (eeprom->type == ixgbe_eeprom_uninitialized) { + u16 eeprom_size; + u32 eec; + eeprom->semaphore_delay = 10; eeprom->type = ixgbe_flash; eec = IXGBE_READ_REG(hw, IXGBE_EEC(hw)); - eeprom_size = (u16)((eec & IXGBE_EEC_SIZE) >> - IXGBE_EEC_SIZE_SHIFT); + eeprom_size = FIELD_GET(IXGBE_EEC_SIZE, eec); eeprom->word_size = BIT(eeprom_size + IXGBE_EEPROM_WORD_SIZE_SHIFT); @@ -712,8 +712,7 @@ static s32 ixgbe_read_iosf_sb_reg_x550(struct ixgbe_hw *hw, u32 reg_addr, ret = ixgbe_iosf_wait(hw, &command); if ((command & IXGBE_SB_IOSF_CTRL_RESP_STAT_MASK) != 0) { - error = (command & IXGBE_SB_IOSF_CTRL_CMPL_ERR_MASK) >> - IXGBE_SB_IOSF_CTRL_CMPL_ERR_SHIFT; + error = FIELD_GET(IXGBE_SB_IOSF_CTRL_CMPL_ERR_MASK, command); hw_dbg(hw, "Failed to read, error %x\n", error); return IXGBE_ERR_PHY; } @@ -1412,8 +1411,7 @@ static s32 ixgbe_write_iosf_sb_reg_x550(struct ixgbe_hw *hw, u32 reg_addr, ret = ixgbe_iosf_wait(hw, &command); if ((command & IXGBE_SB_IOSF_CTRL_RESP_STAT_MASK) != 0) { - error = (command & IXGBE_SB_IOSF_CTRL_CMPL_ERR_MASK) >> - IXGBE_SB_IOSF_CTRL_CMPL_ERR_SHIFT; + error = FIELD_GET(IXGBE_SB_IOSF_CTRL_CMPL_ERR_MASK, command); hw_dbg(hw, "Failed to write, error %x\n", error); return IXGBE_ERR_PHY; } @@ -3222,9 +3220,8 @@ static void ixgbe_read_mng_if_sel_x550em(struct ixgbe_hw *hw) */ if (hw->mac.type == ixgbe_mac_x550em_a && hw->phy.nw_mng_if_sel & IXGBE_NW_MNG_IF_SEL_MDIO_ACT) { - hw->phy.mdio.prtad = (hw->phy.nw_mng_if_sel & - IXGBE_NW_MNG_IF_SEL_MDIO_PHY_ADD) >> - IXGBE_NW_MNG_IF_SEL_MDIO_PHY_ADD_SHIFT; + hw->phy.mdio.prtad = FIELD_GET(IXGBE_NW_MNG_IF_SEL_MDIO_PHY_ADD, + hw->phy.nw_mng_if_sel); } } From patchwork Wed Dec 6 01:01:09 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jesse Brandeburg X-Patchwork-Id: 13480917 X-Patchwork-Delegate: kuba@kernel.org Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b="VlFn4hEL" Received: from mgamail.intel.com (mgamail.intel.com [198.175.65.9]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 606F61A4 for ; Tue, 5 Dec 2023 17:01:39 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1701824500; x=1733360500; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=gaxU5pnC2jcs9AjhNbkSjjHHEQSQMSvr3xD2J9iABgE=; b=VlFn4hELZL4zICJXAn3kNhZy9y3D7V9t1pKmESzt+0DkJefSjNeJ1QUG N3HGzp+AiimhMrd88QzjtjLTgKZRqPrWI7nLgNIe599FBBwTL7ZyfCGR+ I6cugPNoc4fgzNEfWUk2IINQMagd4UFTvTcRuhqSFRKvq+mSEjr8kIK1A ild31pRjABA+g7OxPpsUoWmKtMktONuRPCGOYdmc45fDAnT0P001qCkxP jPusLaW94H9K6Lbw1L0gtRC24ckwVVHwc279CLr0fHU9SomZUheK8rWn6 pWc44MRQuulhtM6yjGa4nh5J0C6B2eagSlL6li92R8dEchhgxmrgBCvV/ A==; X-IronPort-AV: E=McAfee;i="6600,9927,10915"; a="12700310" X-IronPort-AV: E=Sophos;i="6.04,254,1695711600"; d="scan'208";a="12700310" Received: from fmsmga004.fm.intel.com ([10.253.24.48]) by orvoesa101.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 05 Dec 2023 17:01:35 -0800 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10915"; a="841655270" X-IronPort-AV: 
E=Sophos;i="6.04,254,1695711600"; d="scan'208";a="841655270" Received: from jbrandeb-spr1.jf.intel.com ([10.166.28.233]) by fmsmga004-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 05 Dec 2023 17:01:34 -0800 From: Jesse Brandeburg To: intel-wired-lan@lists.osuosl.org Cc: Jesse Brandeburg , netdev@vger.kernel.org, aleksander.lobakin@intel.com, przemyslaw.kitszel@intel.com, horms@kernel.org, marcin.szycik@linux.intel.com, Julia Lawall Subject: [PATCH iwl-next v2 10/15] igc: field get conversion Date: Tue, 5 Dec 2023 17:01:09 -0800 Message-Id: <20231206010114.2259388-11-jesse.brandeburg@intel.com> X-Mailer: git-send-email 2.39.3 In-Reply-To: <20231206010114.2259388-1-jesse.brandeburg@intel.com> References: <20231206010114.2259388-1-jesse.brandeburg@intel.com> Precedence: bulk X-Mailing-List: netdev@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Patchwork-Delegate: kuba@kernel.org Refactor the igc driver to use FIELD_GET() for mask and shift reads, which reduces lines of code and adds clarity of intent. This code was generated by the following coccinelle/spatch script and then manually repaired in a later patch. @get@ constant shift,mask; type T; expression a; @@ -((T)((a) & mask) >> shift) +FIELD_GET(mask, a) and applied via: spatch --sp-file field_prep.cocci --in-place --dir \ drivers/net/ethernet/intel/ Cc: Julia Lawall Reviewed-by: Marcin Szycik Reviewed-by: Simon Horman Signed-off-by: Jesse Brandeburg --- drivers/net/ethernet/intel/igc/igc_base.c | 6 ++---- drivers/net/ethernet/intel/igc/igc_i225.c | 5 ++--- drivers/net/ethernet/intel/igc/igc_main.c | 6 ++---- drivers/net/ethernet/intel/igc/igc_phy.c | 4 ++-- 4 files changed, 8 insertions(+), 13 deletions(-) diff --git a/drivers/net/ethernet/intel/igc/igc_base.c b/drivers/net/ethernet/intel/igc/igc_base.c index a1d815af507d..9fae8bdec2a7 100644 --- a/drivers/net/ethernet/intel/igc/igc_base.c +++ b/drivers/net/ethernet/intel/igc/igc_base.c @@ -68,8 +68,7 @@ static s32 igc_init_nvm_params_base(struct igc_hw *hw) u32 eecd = rd32(IGC_EECD); u16 size; - size = (u16)((eecd & IGC_EECD_SIZE_EX_MASK) >> - IGC_EECD_SIZE_EX_SHIFT); + size = FIELD_GET(IGC_EECD_SIZE_EX_MASK, eecd); /* Added to a constant, "size" becomes the left-shift value * for setting word_size. @@ -162,8 +161,7 @@ static s32 igc_init_phy_params_base(struct igc_hw *hw) phy->reset_delay_us = 100; /* set lan id */ - hw->bus.func = (rd32(IGC_STATUS) & IGC_STATUS_FUNC_MASK) >> - IGC_STATUS_FUNC_SHIFT; + hw->bus.func = FIELD_GET(IGC_STATUS_FUNC_MASK, rd32(IGC_STATUS)); /* Make sure the PHY is in a good state. Several people have reported * firmware leaving the PHY's page select register set to something diff --git a/drivers/net/ethernet/intel/igc/igc_i225.c b/drivers/net/ethernet/intel/igc/igc_i225.c index d2562c8e8015..0dd61719f1ed 100644 --- a/drivers/net/ethernet/intel/igc/igc_i225.c +++ b/drivers/net/ethernet/intel/igc/igc_i225.c @@ -579,9 +579,8 @@ s32 igc_set_ltr_i225(struct igc_hw *hw, bool link) /* Calculate tw_system (nsec). 
*/ if (speed == SPEED_100) { - tw_system = ((rd32(IGC_EEE_SU) & - IGC_TW_SYSTEM_100_MASK) >> - IGC_TW_SYSTEM_100_SHIFT) * 500; + tw_system = FIELD_GET(IGC_TW_SYSTEM_100_MASK, + rd32(IGC_EEE_SU)) * 500; } else { tw_system = (rd32(IGC_EEE_SU) & IGC_TW_SYSTEM_1000_MASK) * 500; diff --git a/drivers/net/ethernet/intel/igc/igc_main.c b/drivers/net/ethernet/intel/igc/igc_main.c index d949289a3ddb..ba8d3fe186ae 100644 --- a/drivers/net/ethernet/intel/igc/igc_main.c +++ b/drivers/net/ethernet/intel/igc/igc_main.c @@ -3712,8 +3712,7 @@ static int igc_enable_nfc_rule(struct igc_adapter *adapter, } if (rule->filter.match_flags & IGC_FILTER_FLAG_VLAN_TCI) { - int prio = (rule->filter.vlan_tci & VLAN_PRIO_MASK) >> - VLAN_PRIO_SHIFT; + int prio = FIELD_GET(VLAN_PRIO_MASK, rule->filter.vlan_tci); err = igc_add_vlan_prio_filter(adapter, prio, rule->action); if (err) @@ -3735,8 +3734,7 @@ static void igc_disable_nfc_rule(struct igc_adapter *adapter, igc_del_etype_filter(adapter, rule->filter.etype); if (rule->filter.match_flags & IGC_FILTER_FLAG_VLAN_TCI) { - int prio = (rule->filter.vlan_tci & VLAN_PRIO_MASK) >> - VLAN_PRIO_SHIFT; + int prio = FIELD_GET(VLAN_PRIO_MASK, rule->filter.vlan_tci); igc_del_vlan_prio_filter(adapter, prio); } diff --git a/drivers/net/ethernet/intel/igc/igc_phy.c b/drivers/net/ethernet/intel/igc/igc_phy.c index d0d9e7170154..7cd8716d2ffa 100644 --- a/drivers/net/ethernet/intel/igc/igc_phy.c +++ b/drivers/net/ethernet/intel/igc/igc_phy.c @@ -727,7 +727,7 @@ static s32 igc_write_xmdio_reg(struct igc_hw *hw, u16 addr, */ s32 igc_write_phy_reg_gpy(struct igc_hw *hw, u32 offset, u16 data) { - u8 dev_addr = (offset & GPY_MMD_MASK) >> GPY_MMD_SHIFT; + u8 dev_addr = FIELD_GET(GPY_MMD_MASK, offset); s32 ret_val; offset = offset & GPY_REG_MASK; @@ -758,7 +758,7 @@ s32 igc_write_phy_reg_gpy(struct igc_hw *hw, u32 offset, u16 data) */ s32 igc_read_phy_reg_gpy(struct igc_hw *hw, u32 offset, u16 *data) { - u8 dev_addr = (offset & GPY_MMD_MASK) >> GPY_MMD_SHIFT; + u8 dev_addr = FIELD_GET(GPY_MMD_MASK, offset); s32 ret_val; offset = offset & GPY_REG_MASK; From patchwork Wed Dec 6 01:01:10 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jesse Brandeburg X-Patchwork-Id: 13480923 X-Patchwork-Delegate: kuba@kernel.org Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b="hfgWj3sy" Received: from mgamail.intel.com (mgamail.intel.com [198.175.65.9]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 301D31B8 for ; Tue, 5 Dec 2023 17:01:40 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1701824501; x=1733360501; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=QubS/fUhTIVkJsI2NhvpcMqPA7621vgbrJ9Id7T9mnU=; b=hfgWj3syRZ5NASW7AOPDLMHsxT0d4ilZLfjK6Uu4SaxssRuEC5HJV62c vEJ6GIyqNdfNwOL2rMV//JUB/nXQxNpnMIVWYo6SzwtoFdv4buQcLLxOf 7MmXiw2bob79A6IJy+s5u23eQ+lroROnO/1rsStkCnwgBLIbo0ttHrY+W 3Gldflc++v8kpMXkzKjtnn0nb3QTmxecNesWbv/rBB+7HH8XfBPt+tkR5 8Wii9DmZOE9QHPxQqmkMGHBRsliK+qNd/duXl4JqIeHGlAvsqHuYKu8/S pgCiSD6oZjEkwdudPDcyT8MrnKaSHolJB1M3m62TT0V1oHBYTedzieUZK w==; X-IronPort-AV: E=McAfee;i="6600,9927,10915"; a="12700316" X-IronPort-AV: E=Sophos;i="6.04,254,1695711600"; d="scan'208";a="12700316" Received: from fmsmga004.fm.intel.com ([10.253.24.48]) by orvoesa101.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 05 Dec 2023 17:01:36 -0800 
X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10915"; a="841655277" X-IronPort-AV: E=Sophos;i="6.04,254,1695711600"; d="scan'208";a="841655277" Received: from jbrandeb-spr1.jf.intel.com ([10.166.28.233]) by fmsmga004-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 05 Dec 2023 17:01:34 -0800 From: Jesse Brandeburg To: intel-wired-lan@lists.osuosl.org Cc: Jesse Brandeburg , netdev@vger.kernel.org, aleksander.lobakin@intel.com, przemyslaw.kitszel@intel.com, horms@kernel.org, marcin.szycik@linux.intel.com, Julia Lawall , Aleksandr Loktionov Subject: [PATCH iwl-next v2 11/15] i40e: field get conversion Date: Tue, 5 Dec 2023 17:01:10 -0800 Message-Id: <20231206010114.2259388-12-jesse.brandeburg@intel.com> X-Mailer: git-send-email 2.39.3 In-Reply-To: <20231206010114.2259388-1-jesse.brandeburg@intel.com> References: <20231206010114.2259388-1-jesse.brandeburg@intel.com> Precedence: bulk X-Mailing-List: netdev@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Patchwork-Delegate: kuba@kernel.org Refactor the i40e driver to use FIELD_GET() for mask and shift reads, which reduces lines of code and adds clarity of intent. This code was generated by the following coccinelle/spatch script and then manually repaired. While making one of the conversions, an if() check was inverted to return early and avoid un-necessary indentation of the remainder of the function. In some other cases a stack variable was moved inside the block where it was used while doing cleanups/review. A couple places were changed to use le16_get_bits() instead of FIELD_GET with a le16_to_cpu combination. @get@ constant shift,mask; metavariable type T; expression a; @@ -(((T)(a) & mask) >> shift) +FIELD_GET(mask, a) and applied via: spatch --sp-file field_prep.cocci --in-place --dir \ drivers/net/ethernet/intel/ Cc: Julia Lawall Reviewed-by: Aleksandr Loktionov Reviewed-by: Marcin Szycik Reviewed-by: Simon Horman Signed-off-by: Jesse Brandeburg Tested-by: Pucha Himasekhar Reddy (A Contingent worker at Intel) --- v2: add a couple get_bits --- drivers/net/ethernet/intel/i40e/i40e_common.c | 56 +++---- drivers/net/ethernet/intel/i40e/i40e_dcb.c | 158 +++++++----------- drivers/net/ethernet/intel/i40e/i40e_dcb_nl.c | 3 +- drivers/net/ethernet/intel/i40e/i40e_ddp.c | 4 +- .../net/ethernet/intel/i40e/i40e_ethtool.c | 7 +- drivers/net/ethernet/intel/i40e/i40e_main.c | 73 ++++---- drivers/net/ethernet/intel/i40e/i40e_nvm.c | 13 +- drivers/net/ethernet/intel/i40e/i40e_ptp.c | 4 +- drivers/net/ethernet/intel/i40e/i40e_txrx.c | 29 ++-- .../ethernet/intel/i40e/i40e_virtchnl_pf.c | 21 ++- drivers/net/ethernet/intel/i40e/i40e_xsk.c | 3 +- 11 files changed, 145 insertions(+), 226 deletions(-) diff --git a/drivers/net/ethernet/intel/i40e/i40e_common.c b/drivers/net/ethernet/intel/i40e/i40e_common.c index 4ec4ab2c7d48..de6ca6295742 100644 --- a/drivers/net/ethernet/intel/i40e/i40e_common.c +++ b/drivers/net/ethernet/intel/i40e/i40e_common.c @@ -664,11 +664,11 @@ int i40e_init_shared_code(struct i40e_hw *hw) hw->phy.get_link_info = true; /* Determine port number and PF number*/ - port = (rd32(hw, I40E_PFGEN_PORTNUM) & I40E_PFGEN_PORTNUM_PORT_NUM_MASK) - >> I40E_PFGEN_PORTNUM_PORT_NUM_SHIFT; + port = FIELD_GET(I40E_PFGEN_PORTNUM_PORT_NUM_MASK, + rd32(hw, I40E_PFGEN_PORTNUM)); hw->port = (u8)port; - ari = (rd32(hw, I40E_GLPCI_CAPSUP) & I40E_GLPCI_CAPSUP_ARI_EN_MASK) >> - I40E_GLPCI_CAPSUP_ARI_EN_SHIFT; + ari = FIELD_GET(I40E_GLPCI_CAPSUP_ARI_EN_MASK, + rd32(hw, I40E_GLPCI_CAPSUP)); func_rid = rd32(hw, 
I40E_PF_FUNC_RID); if (ari) hw->pf_id = (u8)(func_rid & 0xff); @@ -986,9 +986,8 @@ int i40e_pf_reset(struct i40e_hw *hw) * The grst delay value is in 100ms units, and we'll wait a * couple counts longer to be sure we don't just miss the end. */ - grst_del = (rd32(hw, I40E_GLGEN_RSTCTL) & - I40E_GLGEN_RSTCTL_GRSTDEL_MASK) >> - I40E_GLGEN_RSTCTL_GRSTDEL_SHIFT; + grst_del = FIELD_GET(I40E_GLGEN_RSTCTL_GRSTDEL_MASK, + rd32(hw, I40E_GLGEN_RSTCTL)); /* It can take upto 15 secs for GRST steady state. * Bump it to 16 secs max to be safe. @@ -1080,26 +1079,20 @@ void i40e_clear_hw(struct i40e_hw *hw) /* get number of interrupts, queues, and VFs */ val = rd32(hw, I40E_GLPCI_CNF2); - num_pf_int = (val & I40E_GLPCI_CNF2_MSI_X_PF_N_MASK) >> - I40E_GLPCI_CNF2_MSI_X_PF_N_SHIFT; - num_vf_int = (val & I40E_GLPCI_CNF2_MSI_X_VF_N_MASK) >> - I40E_GLPCI_CNF2_MSI_X_VF_N_SHIFT; + num_pf_int = FIELD_GET(I40E_GLPCI_CNF2_MSI_X_PF_N_MASK, val); + num_vf_int = FIELD_GET(I40E_GLPCI_CNF2_MSI_X_VF_N_MASK, val); val = rd32(hw, I40E_PFLAN_QALLOC); - base_queue = (val & I40E_PFLAN_QALLOC_FIRSTQ_MASK) >> - I40E_PFLAN_QALLOC_FIRSTQ_SHIFT; - j = (val & I40E_PFLAN_QALLOC_LASTQ_MASK) >> - I40E_PFLAN_QALLOC_LASTQ_SHIFT; + base_queue = FIELD_GET(I40E_PFLAN_QALLOC_FIRSTQ_MASK, val); + j = FIELD_GET(I40E_PFLAN_QALLOC_LASTQ_MASK, val); if (val & I40E_PFLAN_QALLOC_VALID_MASK && j >= base_queue) num_queues = (j - base_queue) + 1; else num_queues = 0; val = rd32(hw, I40E_PF_VT_PFALLOC); - i = (val & I40E_PF_VT_PFALLOC_FIRSTVF_MASK) >> - I40E_PF_VT_PFALLOC_FIRSTVF_SHIFT; - j = (val & I40E_PF_VT_PFALLOC_LASTVF_MASK) >> - I40E_PF_VT_PFALLOC_LASTVF_SHIFT; + i = FIELD_GET(I40E_PF_VT_PFALLOC_FIRSTVF_MASK, val); + j = FIELD_GET(I40E_PF_VT_PFALLOC_LASTVF_MASK, val); if (val & I40E_PF_VT_PFALLOC_VALID_MASK && j >= i) num_vfs = (j - i) + 1; else @@ -1194,8 +1187,7 @@ static u32 i40e_led_is_mine(struct i40e_hw *hw, int idx) !hw->func_caps.led[idx]) return 0; gpio_val = rd32(hw, I40E_GLGEN_GPIO_CTL(idx)); - port = (gpio_val & I40E_GLGEN_GPIO_CTL_PRT_NUM_MASK) >> - I40E_GLGEN_GPIO_CTL_PRT_NUM_SHIFT; + port = FIELD_GET(I40E_GLGEN_GPIO_CTL_PRT_NUM_MASK, gpio_val); /* if PRT_NUM_NA is 1 then this LED is not port specific, OR * if it is not our port then ignore @@ -1239,8 +1231,7 @@ u32 i40e_led_get(struct i40e_hw *hw) if (!gpio_val) continue; - mode = (gpio_val & I40E_GLGEN_GPIO_CTL_LED_MODE_MASK) >> - I40E_GLGEN_GPIO_CTL_LED_MODE_SHIFT; + mode = FIELD_GET(I40E_GLGEN_GPIO_CTL_LED_MODE_MASK, gpio_val); break; } @@ -4190,8 +4181,7 @@ i40e_validate_filter_settings(struct i40e_hw *hw, /* FCHSIZE + FCDSIZE should not be greater than PMFCOEFMAX */ val = rd32(hw, I40E_GLHMC_FCOEFMAX); - fcoe_fmax = (val & I40E_GLHMC_FCOEFMAX_PMFCOEFMAX_MASK) - >> I40E_GLHMC_FCOEFMAX_PMFCOEFMAX_SHIFT; + fcoe_fmax = FIELD_GET(I40E_GLHMC_FCOEFMAX_PMFCOEFMAX_MASK, val); if (fcoe_filt_size + fcoe_cntx_size > fcoe_fmax) return -EINVAL; @@ -4646,8 +4636,7 @@ int i40e_read_phy_register_clause22(struct i40e_hw *hw, "PHY: Can't write command to external PHY.\n"); } else { command = rd32(hw, I40E_GLGEN_MSRWD(port_num)); - *value = (command & I40E_GLGEN_MSRWD_MDIRDDATA_MASK) >> - I40E_GLGEN_MSRWD_MDIRDDATA_SHIFT; + *value = FIELD_GET(I40E_GLGEN_MSRWD_MDIRDDATA_MASK, command); } return status; @@ -4756,8 +4745,7 @@ int i40e_read_phy_register_clause45(struct i40e_hw *hw, if (!status) { command = rd32(hw, I40E_GLGEN_MSRWD(port_num)); - *value = (command & I40E_GLGEN_MSRWD_MDIRDDATA_MASK) >> - I40E_GLGEN_MSRWD_MDIRDDATA_SHIFT; + *value = FIELD_GET(I40E_GLGEN_MSRWD_MDIRDDATA_MASK, command); 
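/* Illustrative aside (not part of the diff): where a field sits in a
 * little-endian descriptor or admin-queue word, this series uses
 * le16_get_bits() from <linux/bitfield.h> instead of le16_to_cpu() plus a
 * mask-and-shift. Minimal sketch; DEMO_TNL_TYPE_MASK is an assumed mask,
 * not a real i40e definition.
 */
#include <linux/bitfield.h>
#include <linux/bits.h>
#include <linux/types.h>

#define DEMO_TNL_TYPE_MASK	GENMASK(12, 9)	/* hypothetical LE16 field */

static u16 demo_get_tnl_type(__le16 flags)
{
	/* equivalent to (le16_to_cpu(flags) & DEMO_TNL_TYPE_MASK) >> 9 */
	return le16_get_bits(flags, DEMO_TNL_TYPE_MASK);
}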
} else { i40e_debug(hw, I40E_DEBUG_PHY, "PHY: Can't read register value from external PHY.\n"); @@ -5902,9 +5890,8 @@ i40e_aq_add_cloud_filters_bb(struct i40e_hw *hw, u16 seid, u16 tnl_type; u32 ti; - tnl_type = (le16_to_cpu(filters[i].element.flags) & - I40E_AQC_ADD_CLOUD_TNL_TYPE_MASK) >> - I40E_AQC_ADD_CLOUD_TNL_TYPE_SHIFT; + tnl_type = le16_get_bits(filters[i].element.flags, + I40E_AQC_ADD_CLOUD_TNL_TYPE_MASK); /* Due to hardware eccentricities, the VNI for Geneve is shifted * one more byte further than normally used for Tenant ID in @@ -5996,9 +5983,8 @@ i40e_aq_rem_cloud_filters_bb(struct i40e_hw *hw, u16 seid, u16 tnl_type; u32 ti; - tnl_type = (le16_to_cpu(filters[i].element.flags) & - I40E_AQC_ADD_CLOUD_TNL_TYPE_MASK) >> - I40E_AQC_ADD_CLOUD_TNL_TYPE_SHIFT; + tnl_type = le16_get_bits(filters[i].element.flags, + I40E_AQC_ADD_CLOUD_TNL_TYPE_MASK); /* Due to hardware eccentricities, the VNI for Geneve is shifted * one more byte further than normally used for Tenant ID in diff --git a/drivers/net/ethernet/intel/i40e/i40e_dcb.c b/drivers/net/ethernet/intel/i40e/i40e_dcb.c index a0691b7c87c4..9d88ed6105fd 100644 --- a/drivers/net/ethernet/intel/i40e/i40e_dcb.c +++ b/drivers/net/ethernet/intel/i40e/i40e_dcb.c @@ -22,8 +22,7 @@ int i40e_get_dcbx_status(struct i40e_hw *hw, u16 *status) return -EINVAL; reg = rd32(hw, I40E_PRTDCB_GENS); - *status = (u16)((reg & I40E_PRTDCB_GENS_DCBX_STATUS_MASK) >> - I40E_PRTDCB_GENS_DCBX_STATUS_SHIFT); + *status = FIELD_GET(I40E_PRTDCB_GENS_DCBX_STATUS_MASK, reg); return 0; } @@ -52,12 +51,9 @@ static void i40e_parse_ieee_etscfg_tlv(struct i40e_lldp_org_tlv *tlv, * |1bit | 1bit|3 bits|3bits| */ etscfg = &dcbcfg->etscfg; - etscfg->willing = (u8)((buf[offset] & I40E_IEEE_ETS_WILLING_MASK) >> - I40E_IEEE_ETS_WILLING_SHIFT); - etscfg->cbs = (u8)((buf[offset] & I40E_IEEE_ETS_CBS_MASK) >> - I40E_IEEE_ETS_CBS_SHIFT); - etscfg->maxtcs = (u8)((buf[offset] & I40E_IEEE_ETS_MAXTC_MASK) >> - I40E_IEEE_ETS_MAXTC_SHIFT); + etscfg->willing = FIELD_GET(I40E_IEEE_ETS_WILLING_MASK, buf[offset]); + etscfg->cbs = FIELD_GET(I40E_IEEE_ETS_CBS_MASK, buf[offset]); + etscfg->maxtcs = FIELD_GET(I40E_IEEE_ETS_MAXTC_MASK, buf[offset]); /* Move offset to Priority Assignment Table */ offset++; @@ -71,11 +67,9 @@ static void i40e_parse_ieee_etscfg_tlv(struct i40e_lldp_org_tlv *tlv, * ----------------------------------------- */ for (i = 0; i < 4; i++) { - priority = (u8)((buf[offset] & I40E_IEEE_ETS_PRIO_1_MASK) >> - I40E_IEEE_ETS_PRIO_1_SHIFT); - etscfg->prioritytable[i * 2] = priority; - priority = (u8)((buf[offset] & I40E_IEEE_ETS_PRIO_0_MASK) >> - I40E_IEEE_ETS_PRIO_0_SHIFT); + priority = FIELD_GET(I40E_IEEE_ETS_PRIO_1_MASK, buf[offset]); + etscfg->prioritytable[i * 2] = priority; + priority = FIELD_GET(I40E_IEEE_ETS_PRIO_0_MASK, buf[offset]); etscfg->prioritytable[i * 2 + 1] = priority; offset++; } @@ -126,12 +120,10 @@ static void i40e_parse_ieee_etsrec_tlv(struct i40e_lldp_org_tlv *tlv, * ----------------------------------------- */ for (i = 0; i < 4; i++) { - priority = (u8)((buf[offset] & I40E_IEEE_ETS_PRIO_1_MASK) >> - I40E_IEEE_ETS_PRIO_1_SHIFT); - dcbcfg->etsrec.prioritytable[i*2] = priority; - priority = (u8)((buf[offset] & I40E_IEEE_ETS_PRIO_0_MASK) >> - I40E_IEEE_ETS_PRIO_0_SHIFT); - dcbcfg->etsrec.prioritytable[i*2 + 1] = priority; + priority = FIELD_GET(I40E_IEEE_ETS_PRIO_1_MASK, buf[offset]); + dcbcfg->etsrec.prioritytable[i * 2] = priority; + priority = FIELD_GET(I40E_IEEE_ETS_PRIO_0_MASK, buf[offset]); + dcbcfg->etsrec.prioritytable[(i * 2) + 1] = priority; offset++; } 
@@ -172,12 +164,9 @@ static void i40e_parse_ieee_pfccfg_tlv(struct i40e_lldp_org_tlv *tlv, * ----------------------------------------- * |1bit | 1bit|2 bits|4bits| 1 octet | */ - dcbcfg->pfc.willing = (u8)((buf[0] & I40E_IEEE_PFC_WILLING_MASK) >> - I40E_IEEE_PFC_WILLING_SHIFT); - dcbcfg->pfc.mbc = (u8)((buf[0] & I40E_IEEE_PFC_MBC_MASK) >> - I40E_IEEE_PFC_MBC_SHIFT); - dcbcfg->pfc.pfccap = (u8)((buf[0] & I40E_IEEE_PFC_CAP_MASK) >> - I40E_IEEE_PFC_CAP_SHIFT); + dcbcfg->pfc.willing = FIELD_GET(I40E_IEEE_PFC_WILLING_MASK, buf[0]); + dcbcfg->pfc.mbc = FIELD_GET(I40E_IEEE_PFC_MBC_MASK, buf[0]); + dcbcfg->pfc.pfccap = FIELD_GET(I40E_IEEE_PFC_CAP_MASK, buf[0]); dcbcfg->pfc.pfcenable = buf[1]; } @@ -198,8 +187,7 @@ static void i40e_parse_ieee_app_tlv(struct i40e_lldp_org_tlv *tlv, u8 *buf; typelength = ntohs(tlv->typelength); - length = (u16)((typelength & I40E_LLDP_TLV_LEN_MASK) >> - I40E_LLDP_TLV_LEN_SHIFT); + length = FIELD_GET(I40E_LLDP_TLV_LEN_MASK, typelength); buf = tlv->tlvinfo; /* The App priority table starts 5 octets after TLV header */ @@ -217,12 +205,10 @@ static void i40e_parse_ieee_app_tlv(struct i40e_lldp_org_tlv *tlv, * ----------------------------------------- */ while (offset < length) { - dcbcfg->app[i].priority = (u8)((buf[offset] & - I40E_IEEE_APP_PRIO_MASK) >> - I40E_IEEE_APP_PRIO_SHIFT); - dcbcfg->app[i].selector = (u8)((buf[offset] & - I40E_IEEE_APP_SEL_MASK) >> - I40E_IEEE_APP_SEL_SHIFT); + dcbcfg->app[i].priority = FIELD_GET(I40E_IEEE_APP_PRIO_MASK, + buf[offset]); + dcbcfg->app[i].selector = FIELD_GET(I40E_IEEE_APP_SEL_MASK, + buf[offset]); dcbcfg->app[i].protocolid = (buf[offset + 1] << 0x8) | buf[offset + 2]; /* Move to next app */ @@ -250,8 +236,7 @@ static void i40e_parse_ieee_tlv(struct i40e_lldp_org_tlv *tlv, u8 subtype; ouisubtype = ntohl(tlv->ouisubtype); - subtype = (u8)((ouisubtype & I40E_LLDP_TLV_SUBTYPE_MASK) >> - I40E_LLDP_TLV_SUBTYPE_SHIFT); + subtype = FIELD_GET(I40E_LLDP_TLV_SUBTYPE_MASK, ouisubtype); switch (subtype) { case I40E_IEEE_SUBTYPE_ETS_CFG: i40e_parse_ieee_etscfg_tlv(tlv, dcbcfg); @@ -301,11 +286,9 @@ static void i40e_parse_cee_pgcfg_tlv(struct i40e_cee_feat_tlv *tlv, * ----------------------------------------- */ for (i = 0; i < 4; i++) { - priority = (u8)((buf[offset] & I40E_CEE_PGID_PRIO_1_MASK) >> - I40E_CEE_PGID_PRIO_1_SHIFT); - etscfg->prioritytable[i * 2] = priority; - priority = (u8)((buf[offset] & I40E_CEE_PGID_PRIO_0_MASK) >> - I40E_CEE_PGID_PRIO_0_SHIFT); + priority = FIELD_GET(I40E_CEE_PGID_PRIO_1_MASK, buf[offset]); + etscfg->prioritytable[i * 2] = priority; + priority = FIELD_GET(I40E_CEE_PGID_PRIO_0_MASK, buf[offset]); etscfg->prioritytable[i * 2 + 1] = priority; offset++; } @@ -362,8 +345,7 @@ static void i40e_parse_cee_app_tlv(struct i40e_cee_feat_tlv *tlv, u8 i; typelength = ntohs(tlv->hdr.typelen); - length = (u16)((typelength & I40E_LLDP_TLV_LEN_MASK) >> - I40E_LLDP_TLV_LEN_SHIFT); + length = FIELD_GET(I40E_LLDP_TLV_LEN_MASK, typelength); dcbcfg->numapps = length / sizeof(*app); @@ -419,15 +401,13 @@ static void i40e_parse_cee_tlv(struct i40e_lldp_org_tlv *tlv, u32 ouisubtype; ouisubtype = ntohl(tlv->ouisubtype); - subtype = (u8)((ouisubtype & I40E_LLDP_TLV_SUBTYPE_MASK) >> - I40E_LLDP_TLV_SUBTYPE_SHIFT); + subtype = FIELD_GET(I40E_LLDP_TLV_SUBTYPE_MASK, ouisubtype); /* Return if not CEE DCBX */ if (subtype != I40E_CEE_DCBX_TYPE) return; typelength = ntohs(tlv->typelength); - tlvlen = (u16)((typelength & I40E_LLDP_TLV_LEN_MASK) >> - I40E_LLDP_TLV_LEN_SHIFT); + tlvlen = FIELD_GET(I40E_LLDP_TLV_LEN_MASK, typelength); len = 
sizeof(tlv->typelength) + sizeof(ouisubtype) + sizeof(struct i40e_cee_ctrl_tlv); /* Return if no CEE DCBX Feature TLVs */ @@ -437,11 +417,8 @@ static void i40e_parse_cee_tlv(struct i40e_lldp_org_tlv *tlv, sub_tlv = (struct i40e_cee_feat_tlv *)((char *)tlv + len); while (feat_tlv_count < I40E_CEE_MAX_FEAT_TYPE) { typelength = ntohs(sub_tlv->hdr.typelen); - sublen = (u16)((typelength & - I40E_LLDP_TLV_LEN_MASK) >> - I40E_LLDP_TLV_LEN_SHIFT); - subtype = (u8)((typelength & I40E_LLDP_TLV_TYPE_MASK) >> - I40E_LLDP_TLV_TYPE_SHIFT); + sublen = FIELD_GET(I40E_LLDP_TLV_LEN_MASK, typelength); + subtype = FIELD_GET(I40E_LLDP_TLV_TYPE_MASK, typelength); switch (subtype) { case I40E_CEE_SUBTYPE_PG_CFG: i40e_parse_cee_pgcfg_tlv(sub_tlv, dcbcfg); @@ -478,8 +455,7 @@ static void i40e_parse_org_tlv(struct i40e_lldp_org_tlv *tlv, u32 oui; ouisubtype = ntohl(tlv->ouisubtype); - oui = (u32)((ouisubtype & I40E_LLDP_TLV_OUI_MASK) >> - I40E_LLDP_TLV_OUI_SHIFT); + oui = FIELD_GET(I40E_LLDP_TLV_OUI_MASK, ouisubtype); switch (oui) { case I40E_IEEE_8021QAZ_OUI: i40e_parse_ieee_tlv(tlv, dcbcfg); @@ -517,10 +493,8 @@ int i40e_lldp_to_dcb_config(u8 *lldpmib, tlv = (struct i40e_lldp_org_tlv *)lldpmib; while (1) { typelength = ntohs(tlv->typelength); - type = (u16)((typelength & I40E_LLDP_TLV_TYPE_MASK) >> - I40E_LLDP_TLV_TYPE_SHIFT); - length = (u16)((typelength & I40E_LLDP_TLV_LEN_MASK) >> - I40E_LLDP_TLV_LEN_SHIFT); + type = FIELD_GET(I40E_LLDP_TLV_TYPE_MASK, typelength); + length = FIELD_GET(I40E_LLDP_TLV_LEN_MASK, typelength); offset += sizeof(typelength) + length; /* END TLV or beyond LLDPDU size */ @@ -594,7 +568,7 @@ static void i40e_cee_to_dcb_v1_config( { u16 status, tlv_status = le16_to_cpu(cee_cfg->tlv_status); u16 app_prio = le16_to_cpu(cee_cfg->oper_app_prio); - u8 i, tc, err; + u8 i, err; /* CEE PG data to ETS config */ dcbcfg->etscfg.maxtcs = cee_cfg->oper_num_tc; @@ -603,13 +577,13 @@ static void i40e_cee_to_dcb_v1_config( * from those in the CEE Priority Group sub-TLV. */ for (i = 0; i < 4; i++) { - tc = (u8)((cee_cfg->oper_prio_tc[i] & - I40E_CEE_PGID_PRIO_0_MASK) >> - I40E_CEE_PGID_PRIO_0_SHIFT); - dcbcfg->etscfg.prioritytable[i * 2] = tc; - tc = (u8)((cee_cfg->oper_prio_tc[i] & - I40E_CEE_PGID_PRIO_1_MASK) >> - I40E_CEE_PGID_PRIO_1_SHIFT); + u8 tc; + + tc = FIELD_GET(I40E_CEE_PGID_PRIO_0_MASK, + cee_cfg->oper_prio_tc[i]); + dcbcfg->etscfg.prioritytable[i * 2] = tc; + tc = FIELD_GET(I40E_CEE_PGID_PRIO_1_MASK, + cee_cfg->oper_prio_tc[i]); dcbcfg->etscfg.prioritytable[i*2 + 1] = tc; } @@ -631,8 +605,7 @@ static void i40e_cee_to_dcb_v1_config( dcbcfg->pfc.pfcenable = cee_cfg->oper_pfc_en; dcbcfg->pfc.pfccap = I40E_MAX_TRAFFIC_CLASS; - status = (tlv_status & I40E_AQC_CEE_APP_STATUS_MASK) >> - I40E_AQC_CEE_APP_STATUS_SHIFT; + status = FIELD_GET(I40E_AQC_CEE_APP_STATUS_MASK, tlv_status); err = (status & I40E_TLV_STATUS_ERR) ? 
1 : 0; /* Add APPs if Error is False */ if (!err) { @@ -641,22 +614,19 @@ static void i40e_cee_to_dcb_v1_config( /* FCoE APP */ dcbcfg->app[0].priority = - (app_prio & I40E_AQC_CEE_APP_FCOE_MASK) >> - I40E_AQC_CEE_APP_FCOE_SHIFT; + FIELD_GET(I40E_AQC_CEE_APP_FCOE_MASK, app_prio); dcbcfg->app[0].selector = I40E_APP_SEL_ETHTYPE; dcbcfg->app[0].protocolid = I40E_APP_PROTOID_FCOE; /* iSCSI APP */ dcbcfg->app[1].priority = - (app_prio & I40E_AQC_CEE_APP_ISCSI_MASK) >> - I40E_AQC_CEE_APP_ISCSI_SHIFT; + FIELD_GET(I40E_AQC_CEE_APP_ISCSI_MASK, app_prio); dcbcfg->app[1].selector = I40E_APP_SEL_TCPIP; dcbcfg->app[1].protocolid = I40E_APP_PROTOID_ISCSI; /* FIP APP */ dcbcfg->app[2].priority = - (app_prio & I40E_AQC_CEE_APP_FIP_MASK) >> - I40E_AQC_CEE_APP_FIP_SHIFT; + FIELD_GET(I40E_AQC_CEE_APP_FIP_MASK, app_prio); dcbcfg->app[2].selector = I40E_APP_SEL_ETHTYPE; dcbcfg->app[2].protocolid = I40E_APP_PROTOID_FIP; } @@ -675,7 +645,7 @@ static void i40e_cee_to_dcb_config( { u32 status, tlv_status = le32_to_cpu(cee_cfg->tlv_status); u16 app_prio = le16_to_cpu(cee_cfg->oper_app_prio); - u8 i, tc, err, sync, oper; + u8 i, err, sync, oper; /* CEE PG data to ETS config */ dcbcfg->etscfg.maxtcs = cee_cfg->oper_num_tc; @@ -684,13 +654,13 @@ static void i40e_cee_to_dcb_config( * from those in the CEE Priority Group sub-TLV. */ for (i = 0; i < 4; i++) { - tc = (u8)((cee_cfg->oper_prio_tc[i] & - I40E_CEE_PGID_PRIO_0_MASK) >> - I40E_CEE_PGID_PRIO_0_SHIFT); - dcbcfg->etscfg.prioritytable[i * 2] = tc; - tc = (u8)((cee_cfg->oper_prio_tc[i] & - I40E_CEE_PGID_PRIO_1_MASK) >> - I40E_CEE_PGID_PRIO_1_SHIFT); + u8 tc; + + tc = FIELD_GET(I40E_CEE_PGID_PRIO_0_MASK, + cee_cfg->oper_prio_tc[i]); + dcbcfg->etscfg.prioritytable[i * 2] = tc; + tc = FIELD_GET(I40E_CEE_PGID_PRIO_1_MASK, + cee_cfg->oper_prio_tc[i]); dcbcfg->etscfg.prioritytable[i * 2 + 1] = tc; } @@ -713,8 +683,7 @@ static void i40e_cee_to_dcb_config( dcbcfg->pfc.pfccap = I40E_MAX_TRAFFIC_CLASS; i = 0; - status = (tlv_status & I40E_AQC_CEE_FCOE_STATUS_MASK) >> - I40E_AQC_CEE_FCOE_STATUS_SHIFT; + status = FIELD_GET(I40E_AQC_CEE_FCOE_STATUS_MASK, tlv_status); err = (status & I40E_TLV_STATUS_ERR) ? 1 : 0; sync = (status & I40E_TLV_STATUS_SYNC) ? 1 : 0; oper = (status & I40E_TLV_STATUS_OPER) ? 1 : 0; @@ -722,15 +691,13 @@ static void i40e_cee_to_dcb_config( if (!err && sync && oper) { /* FCoE APP */ dcbcfg->app[i].priority = - (app_prio & I40E_AQC_CEE_APP_FCOE_MASK) >> - I40E_AQC_CEE_APP_FCOE_SHIFT; + FIELD_GET(I40E_AQC_CEE_APP_FCOE_MASK, app_prio); dcbcfg->app[i].selector = I40E_APP_SEL_ETHTYPE; dcbcfg->app[i].protocolid = I40E_APP_PROTOID_FCOE; i++; } - status = (tlv_status & I40E_AQC_CEE_ISCSI_STATUS_MASK) >> - I40E_AQC_CEE_ISCSI_STATUS_SHIFT; + status = FIELD_GET(I40E_AQC_CEE_ISCSI_STATUS_MASK, tlv_status); err = (status & I40E_TLV_STATUS_ERR) ? 1 : 0; sync = (status & I40E_TLV_STATUS_SYNC) ? 1 : 0; oper = (status & I40E_TLV_STATUS_OPER) ? 1 : 0; @@ -738,15 +705,13 @@ static void i40e_cee_to_dcb_config( if (!err && sync && oper) { /* iSCSI APP */ dcbcfg->app[i].priority = - (app_prio & I40E_AQC_CEE_APP_ISCSI_MASK) >> - I40E_AQC_CEE_APP_ISCSI_SHIFT; + FIELD_GET(I40E_AQC_CEE_APP_ISCSI_MASK, app_prio); dcbcfg->app[i].selector = I40E_APP_SEL_TCPIP; dcbcfg->app[i].protocolid = I40E_APP_PROTOID_ISCSI; i++; } - status = (tlv_status & I40E_AQC_CEE_FIP_STATUS_MASK) >> - I40E_AQC_CEE_FIP_STATUS_SHIFT; + status = FIELD_GET(I40E_AQC_CEE_FIP_STATUS_MASK, tlv_status); err = (status & I40E_TLV_STATUS_ERR) ? 1 : 0; sync = (status & I40E_TLV_STATUS_SYNC) ? 
1 : 0; oper = (status & I40E_TLV_STATUS_OPER) ? 1 : 0; @@ -754,8 +719,7 @@ static void i40e_cee_to_dcb_config( if (!err && sync && oper) { /* FIP APP */ dcbcfg->app[i].priority = - (app_prio & I40E_AQC_CEE_APP_FIP_MASK) >> - I40E_AQC_CEE_APP_FIP_SHIFT; + FIELD_GET(I40E_AQC_CEE_APP_FIP_MASK, app_prio); dcbcfg->app[i].selector = I40E_APP_SEL_ETHTYPE; dcbcfg->app[i].protocolid = I40E_APP_PROTOID_FIP; i++; @@ -1188,7 +1152,7 @@ static void i40e_add_ieee_app_pri_tlv(struct i40e_lldp_org_tlv *tlv, selector = dcbcfg->app[i].selector & 0x7; buf[offset] = (priority << I40E_IEEE_APP_PRIO_SHIFT) | selector; buf[offset + 1] = (dcbcfg->app[i].protocolid >> 0x8) & 0xFF; - buf[offset + 2] = dcbcfg->app[i].protocolid & 0xFF; + buf[offset + 2] = dcbcfg->app[i].protocolid & 0xFF; /* Move to next app */ offset += 3; i++; @@ -1284,8 +1248,7 @@ int i40e_dcb_config_to_lldp(u8 *lldpmib, u16 *miblen, do { i40e_add_dcb_tlv(tlv, dcbcfg, tlvid++); typelength = ntohs(tlv->typelength); - length = (u16)((typelength & I40E_LLDP_TLV_LEN_MASK) >> - I40E_LLDP_TLV_LEN_SHIFT); + length = FIELD_GET(I40E_LLDP_TLV_LEN_MASK, typelength); if (length) offset += length + I40E_IEEE_TLV_HEADER_LENGTH; /* END TLV or beyond LLDPDU size */ @@ -1537,8 +1500,7 @@ u8 i40e_dcb_hw_get_num_tc(struct i40e_hw *hw) { u32 reg = rd32(hw, I40E_PRTDCB_GENC); - return (u8)((reg & I40E_PRTDCB_GENC_NUMTC_MASK) >> - I40E_PRTDCB_GENC_NUMTC_SHIFT); + return FIELD_GET(I40E_PRTDCB_GENC_NUMTC_MASK, reg); } /** diff --git a/drivers/net/ethernet/intel/i40e/i40e_dcb_nl.c b/drivers/net/ethernet/intel/i40e/i40e_dcb_nl.c index 4721845fda6e..b96a92187ab3 100644 --- a/drivers/net/ethernet/intel/i40e/i40e_dcb_nl.c +++ b/drivers/net/ethernet/intel/i40e/i40e_dcb_nl.c @@ -21,8 +21,7 @@ static void i40e_get_pfc_delay(struct i40e_hw *hw, u16 *delay) u32 val; val = rd32(hw, I40E_PRTDCB_GENC); - *delay = (u16)((val & I40E_PRTDCB_GENC_PFCLDA_MASK) >> - I40E_PRTDCB_GENC_PFCLDA_SHIFT); + *delay = FIELD_GET(I40E_PRTDCB_GENC_PFCLDA_MASK, val); } /** diff --git a/drivers/net/ethernet/intel/i40e/i40e_ddp.c b/drivers/net/ethernet/intel/i40e/i40e_ddp.c index cf25bfc5dc3f..2f53f0f53bc3 100644 --- a/drivers/net/ethernet/intel/i40e/i40e_ddp.c +++ b/drivers/net/ethernet/intel/i40e/i40e_ddp.c @@ -81,8 +81,8 @@ static int i40e_ddp_does_profile_exist(struct i40e_hw *hw, static bool i40e_ddp_profiles_overlap(struct i40e_profile_info *new, struct i40e_profile_info *old) { - unsigned int group_id_old = (u8)((old->track_id & 0x00FF0000) >> 16); - unsigned int group_id_new = (u8)((new->track_id & 0x00FF0000) >> 16); + unsigned int group_id_old = FIELD_GET(0x00FF0000, old->track_id); + unsigned int group_id_new = FIELD_GET(0x00FF0000, new->track_id); /* 0x00 group must be only the first */ if (group_id_new == 0) diff --git a/drivers/net/ethernet/intel/i40e/i40e_ethtool.c b/drivers/net/ethernet/intel/i40e/i40e_ethtool.c index afbe921f6d20..585b599bedeb 100644 --- a/drivers/net/ethernet/intel/i40e/i40e_ethtool.c +++ b/drivers/net/ethernet/intel/i40e/i40e_ethtool.c @@ -1959,9 +1959,8 @@ int i40e_get_eeprom_len(struct net_device *netdev) i40e_trace(ioctl_get_eeprom_len, np->vsi->back, val); return val; } - val = (rd32(hw, I40E_GLPCI_LBARCTRL) - & I40E_GLPCI_LBARCTRL_FL_SIZE_MASK) - >> I40E_GLPCI_LBARCTRL_FL_SIZE_SHIFT; + val = FIELD_GET(I40E_GLPCI_LBARCTRL_FL_SIZE_MASK, + rd32(hw, I40E_GLPCI_LBARCTRL)); /* register returns value in power of 2, 64Kbyte chunks. 
*/ val = (64 * 1024) * BIT(val); i40e_trace(ioctl_get_eeprom_len, np->vsi->back, val); @@ -3300,7 +3299,7 @@ static int i40e_parse_rx_flow_user_data(struct ethtool_rx_flow_spec *fsp, } else if (valid) { data->flex_word = value & I40E_USERDEF_FLEX_WORD; data->flex_offset = - (value & I40E_USERDEF_FLEX_OFFSET) >> 16; + FIELD_GET(I40E_USERDEF_FLEX_OFFSET, value); data->flex_filter = true; } diff --git a/drivers/net/ethernet/intel/i40e/i40e_main.c b/drivers/net/ethernet/intel/i40e/i40e_main.c index 0dfe472747c6..903a1e66697d 100644 --- a/drivers/net/ethernet/intel/i40e/i40e_main.c +++ b/drivers/net/ethernet/intel/i40e/i40e_main.c @@ -1197,11 +1197,9 @@ static void i40e_update_pf_stats(struct i40e_pf *pf) val = rd32(hw, I40E_PRTPM_EEE_STAT); nsd->tx_lpi_status = - (val & I40E_PRTPM_EEE_STAT_TX_LPI_STATUS_MASK) >> - I40E_PRTPM_EEE_STAT_TX_LPI_STATUS_SHIFT; + FIELD_GET(I40E_PRTPM_EEE_STAT_TX_LPI_STATUS_MASK, val); nsd->rx_lpi_status = - (val & I40E_PRTPM_EEE_STAT_RX_LPI_STATUS_MASK) >> - I40E_PRTPM_EEE_STAT_RX_LPI_STATUS_SHIFT; + FIELD_GET(I40E_PRTPM_EEE_STAT_RX_LPI_STATUS_MASK, val); i40e_stat_update32(hw, I40E_PRTPM_TLPIC, pf->stat_offsets_loaded, &osd->tx_lpi_count, &nsd->tx_lpi_count); @@ -4340,8 +4338,7 @@ static irqreturn_t i40e_intr(int irq, void *data) set_bit(__I40E_RESET_INTR_RECEIVED, pf->state); ena_mask &= ~I40E_PFINT_ICR0_ENA_GRST_MASK; val = rd32(hw, I40E_GLGEN_RSTAT); - val = (val & I40E_GLGEN_RSTAT_RESET_TYPE_MASK) - >> I40E_GLGEN_RSTAT_RESET_TYPE_SHIFT; + val = FIELD_GET(I40E_GLGEN_RSTAT_RESET_TYPE_MASK, val); if (val == I40E_RESET_CORER) { pf->corer_count++; i40e_trace(state_reset_corer, pf, pf->corer_count); @@ -5010,8 +5007,8 @@ static void i40e_vsi_free_irq(struct i40e_vsi *vsi) * next_q field of the registers. */ val = rd32(hw, I40E_PFINT_LNKLSTN(vector - 1)); - qp = (val & I40E_PFINT_LNKLSTN_FIRSTQ_INDX_MASK) - >> I40E_PFINT_LNKLSTN_FIRSTQ_INDX_SHIFT; + qp = FIELD_GET(I40E_PFINT_LNKLSTN_FIRSTQ_INDX_MASK, + val); val |= I40E_QUEUE_END_OF_LIST << I40E_PFINT_LNKLSTN_FIRSTQ_INDX_SHIFT; wr32(hw, I40E_PFINT_LNKLSTN(vector - 1), val); @@ -5033,8 +5030,8 @@ static void i40e_vsi_free_irq(struct i40e_vsi *vsi) val = rd32(hw, I40E_QINT_TQCTL(qp)); - next = (val & I40E_QINT_TQCTL_NEXTQ_INDX_MASK) - >> I40E_QINT_TQCTL_NEXTQ_INDX_SHIFT; + next = FIELD_GET(I40E_QINT_TQCTL_NEXTQ_INDX_MASK, + val); val &= ~(I40E_QINT_TQCTL_MSIX_INDX_MASK | I40E_QINT_TQCTL_MSIX0_INDX_MASK | @@ -5052,8 +5049,7 @@ static void i40e_vsi_free_irq(struct i40e_vsi *vsi) free_irq(pf->pdev->irq, pf); val = rd32(hw, I40E_PFINT_LNKLST0); - qp = (val & I40E_PFINT_LNKLSTN_FIRSTQ_INDX_MASK) - >> I40E_PFINT_LNKLSTN_FIRSTQ_INDX_SHIFT; + qp = FIELD_GET(I40E_PFINT_LNKLSTN_FIRSTQ_INDX_MASK, val); val |= I40E_QUEUE_END_OF_LIST << I40E_PFINT_LNKLST0_FIRSTQ_INDX_SHIFT; wr32(hw, I40E_PFINT_LNKLST0, val); @@ -9556,18 +9552,18 @@ static void i40e_handle_lan_overflow_event(struct i40e_pf *pf, dev_dbg(&pf->pdev->dev, "overflow Rx Queue Number = %d QTX_CTL=0x%08x\n", queue, qtx_ctl); + if (FIELD_GET(I40E_QTX_CTL_PFVF_Q_MASK, qtx_ctl) != + I40E_QTX_CTL_VF_QUEUE) + return; + /* Queue belongs to VF, find the VF and issue VF reset */ - if (((qtx_ctl & I40E_QTX_CTL_PFVF_Q_MASK) - >> I40E_QTX_CTL_PFVF_Q_SHIFT) == I40E_QTX_CTL_VF_QUEUE) { - vf_id = (u16)((qtx_ctl & I40E_QTX_CTL_VFVM_INDX_MASK) - >> I40E_QTX_CTL_VFVM_INDX_SHIFT); - vf_id -= hw->func_caps.vf_base_id; - vf = &pf->vf[vf_id]; - i40e_vc_notify_vf_reset(vf); - /* Allow VF to process pending reset notification */ - msleep(20); - i40e_reset_vf(vf, false); - } + vf_id = 
FIELD_GET(I40E_QTX_CTL_VFVM_INDX_MASK, qtx_ctl); + vf_id -= hw->func_caps.vf_base_id; + vf = &pf->vf[vf_id]; + i40e_vc_notify_vf_reset(vf); + /* Allow VF to process pending reset notification */ + msleep(20); + i40e_reset_vf(vf, false); } /** @@ -9593,8 +9589,7 @@ u32 i40e_get_current_fd_count(struct i40e_pf *pf) val = rd32(&pf->hw, I40E_PFQF_FDSTAT); fcnt_prog = (val & I40E_PFQF_FDSTAT_GUARANT_CNT_MASK) + - ((val & I40E_PFQF_FDSTAT_BEST_CNT_MASK) >> - I40E_PFQF_FDSTAT_BEST_CNT_SHIFT); + FIELD_GET(I40E_PFQF_FDSTAT_BEST_CNT_MASK, val); return fcnt_prog; } @@ -9608,8 +9603,7 @@ u32 i40e_get_global_fd_count(struct i40e_pf *pf) val = rd32(&pf->hw, I40E_GLQF_FDCNT_0); fcnt_prog = (val & I40E_GLQF_FDCNT_0_GUARANT_CNT_MASK) + - ((val & I40E_GLQF_FDCNT_0_BESTCNT_MASK) >> - I40E_GLQF_FDCNT_0_BESTCNT_SHIFT); + FIELD_GET(I40E_GLQF_FDCNT_0_BESTCNT_MASK, val); return fcnt_prog; } @@ -11200,14 +11194,10 @@ static void i40e_handle_mdd_event(struct i40e_pf *pf) /* find what triggered the MDD event */ reg = rd32(hw, I40E_GL_MDET_TX); if (reg & I40E_GL_MDET_TX_VALID_MASK) { - u8 pf_num = (reg & I40E_GL_MDET_TX_PF_NUM_MASK) >> - I40E_GL_MDET_TX_PF_NUM_SHIFT; - u16 vf_num = (reg & I40E_GL_MDET_TX_VF_NUM_MASK) >> - I40E_GL_MDET_TX_VF_NUM_SHIFT; - u8 event = (reg & I40E_GL_MDET_TX_EVENT_MASK) >> - I40E_GL_MDET_TX_EVENT_SHIFT; - u16 queue = ((reg & I40E_GL_MDET_TX_QUEUE_MASK) >> - I40E_GL_MDET_TX_QUEUE_SHIFT) - + u8 pf_num = FIELD_GET(I40E_GL_MDET_TX_PF_NUM_MASK, reg); + u16 vf_num = FIELD_GET(I40E_GL_MDET_TX_VF_NUM_MASK, reg); + u8 event = FIELD_GET(I40E_GL_MDET_TX_EVENT_MASK, reg); + u16 queue = FIELD_GET(I40E_GL_MDET_TX_QUEUE_MASK, reg) - pf->hw.func_caps.base_queue; if (netif_msg_tx_err(pf)) dev_info(&pf->pdev->dev, "Malicious Driver Detection event 0x%02x on TX queue %d PF number 0x%02x VF number 0x%02x\n", @@ -11217,12 +11207,9 @@ static void i40e_handle_mdd_event(struct i40e_pf *pf) } reg = rd32(hw, I40E_GL_MDET_RX); if (reg & I40E_GL_MDET_RX_VALID_MASK) { - u8 func = (reg & I40E_GL_MDET_RX_FUNCTION_MASK) >> - I40E_GL_MDET_RX_FUNCTION_SHIFT; - u8 event = (reg & I40E_GL_MDET_RX_EVENT_MASK) >> - I40E_GL_MDET_RX_EVENT_SHIFT; - u16 queue = ((reg & I40E_GL_MDET_RX_QUEUE_MASK) >> - I40E_GL_MDET_RX_QUEUE_SHIFT) - + u8 func = FIELD_GET(I40E_GL_MDET_RX_FUNCTION_MASK, reg); + u8 event = FIELD_GET(I40E_GL_MDET_RX_EVENT_MASK, reg); + u16 queue = FIELD_GET(I40E_GL_MDET_RX_QUEUE_MASK, reg) - pf->hw.func_caps.base_queue; if (netif_msg_rx_err(pf)) dev_info(&pf->pdev->dev, "Malicious Driver Detection event 0x%02x on RX queue %d of function 0x%02x\n", @@ -16187,8 +16174,8 @@ static int i40e_probe(struct pci_dev *pdev, const struct pci_device_id *ent) /* make sure the MFS hasn't been set lower than the default */ #define MAX_FRAME_SIZE_DEFAULT 0x2600 - val = (rd32(&pf->hw, I40E_PRTGL_SAH) & - I40E_PRTGL_SAH_MFS_MASK) >> I40E_PRTGL_SAH_MFS_SHIFT; + val = FIELD_GET(I40E_PRTGL_SAH_MFS_MASK, + rd32(&pf->hw, I40E_PRTGL_SAH)); if (val < MAX_FRAME_SIZE_DEFAULT) dev_warn(&pdev->dev, "MFS for port %x has been set below the default: %x\n", pf->hw.port, val); diff --git a/drivers/net/ethernet/intel/i40e/i40e_nvm.c b/drivers/net/ethernet/intel/i40e/i40e_nvm.c index 70215ae92b0c..987534b0285b 100644 --- a/drivers/net/ethernet/intel/i40e/i40e_nvm.c +++ b/drivers/net/ethernet/intel/i40e/i40e_nvm.c @@ -27,8 +27,7 @@ int i40e_init_nvm(struct i40e_hw *hw) * as the blank mode may be used in the factory line. 
*/ gens = rd32(hw, I40E_GLNVM_GENS); - sr_size = ((gens & I40E_GLNVM_GENS_SR_SIZE_MASK) >> - I40E_GLNVM_GENS_SR_SIZE_SHIFT); + sr_size = FIELD_GET(I40E_GLNVM_GENS_SR_SIZE_MASK, gens); /* Switching to words (sr_size contains power of 2KB) */ nvm->sr_size = BIT(sr_size) * I40E_SR_WORDS_IN_1KB; @@ -194,9 +193,8 @@ static int i40e_read_nvm_word_srctl(struct i40e_hw *hw, u16 offset, ret_code = i40e_poll_sr_srctl_done_bit(hw); if (!ret_code) { sr_reg = rd32(hw, I40E_GLNVM_SRDATA); - *data = (u16)((sr_reg & - I40E_GLNVM_SRDATA_RDDATA_MASK) - >> I40E_GLNVM_SRDATA_RDDATA_SHIFT); + *data = FIELD_GET(I40E_GLNVM_SRDATA_RDDATA_MASK, + sr_reg); } } if (ret_code) @@ -772,13 +770,12 @@ static inline u8 i40e_nvmupd_get_module(u32 val) } static inline u8 i40e_nvmupd_get_transaction(u32 val) { - return (u8)((val & I40E_NVM_TRANS_MASK) >> I40E_NVM_TRANS_SHIFT); + return FIELD_GET(I40E_NVM_TRANS_MASK, val); } static inline u8 i40e_nvmupd_get_preservation_flags(u32 val) { - return (u8)((val & I40E_NVM_PRESERVATION_FLAGS_MASK) >> - I40E_NVM_PRESERVATION_FLAGS_SHIFT); + return FIELD_GET(I40E_NVM_PRESERVATION_FLAGS_MASK, val); } static const char * const i40e_nvm_update_state_str[] = { diff --git a/drivers/net/ethernet/intel/i40e/i40e_ptp.c b/drivers/net/ethernet/intel/i40e/i40e_ptp.c index 1cf993a79438..e7ebcb09f23c 100644 --- a/drivers/net/ethernet/intel/i40e/i40e_ptp.c +++ b/drivers/net/ethernet/intel/i40e/i40e_ptp.c @@ -1480,8 +1480,8 @@ void i40e_ptp_init(struct i40e_pf *pf) /* Only one PF is assigned to control 1588 logic per port. Do not * enable any support for PFs not assigned via PRTTSYN_CTL0.PF_ID */ - pf_id = (rd32(hw, I40E_PRTTSYN_CTL0) & I40E_PRTTSYN_CTL0_PF_ID_MASK) >> - I40E_PRTTSYN_CTL0_PF_ID_SHIFT; + pf_id = FIELD_GET(I40E_PRTTSYN_CTL0_PF_ID_MASK, + rd32(hw, I40E_PRTTSYN_CTL0)); if (hw->pf_id != pf_id) { clear_bit(I40E_FLAG_PTP_ENA, pf->flags); dev_info(&pf->pdev->dev, "%s: PTP not supported on %s\n", diff --git a/drivers/net/ethernet/intel/i40e/i40e_txrx.c b/drivers/net/ethernet/intel/i40e/i40e_txrx.c index b0df3dde1386..971ba3322038 100644 --- a/drivers/net/ethernet/intel/i40e/i40e_txrx.c +++ b/drivers/net/ethernet/intel/i40e/i40e_txrx.c @@ -686,8 +686,7 @@ static void i40e_fd_handle_status(struct i40e_ring *rx_ring, u64 qword0_raw, u32 error; qw0 = (struct i40e_16b_rx_wb_qw0 *)&qword0_raw; - error = (qword1 & I40E_RX_PROG_STATUS_DESC_QW1_ERROR_MASK) >> - I40E_RX_PROG_STATUS_DESC_QW1_ERROR_SHIFT; + error = FIELD_GET(I40E_RX_PROG_STATUS_DESC_QW1_ERROR_MASK, qword1); if (error == BIT(I40E_RX_PROG_STATUS_DESC_FD_TBL_FULL_SHIFT)) { pf->fd_inv = le32_to_cpu(qw0->hi_dword.fd_id); @@ -1398,8 +1397,7 @@ void i40e_clean_programming_status(struct i40e_ring *rx_ring, u64 qword0_raw, { u8 id; - id = (qword1 & I40E_RX_PROG_STATUS_DESC_QW1_PROGID_MASK) >> - I40E_RX_PROG_STATUS_DESC_QW1_PROGID_SHIFT; + id = FIELD_GET(I40E_RX_PROG_STATUS_DESC_QW1_PROGID_MASK, qword1); if (id == I40E_RX_PROG_STATUS_DESC_FD_FILTER_STATUS) i40e_fd_handle_status(rx_ring, qword0_raw, qword1, id); @@ -1759,11 +1757,9 @@ static inline void i40e_rx_checksum(struct i40e_vsi *vsi, u64 qword; qword = le64_to_cpu(rx_desc->wb.qword1.status_error_len); - ptype = (qword & I40E_RXD_QW1_PTYPE_MASK) >> I40E_RXD_QW1_PTYPE_SHIFT; - rx_error = (qword & I40E_RXD_QW1_ERROR_MASK) >> - I40E_RXD_QW1_ERROR_SHIFT; - rx_status = (qword & I40E_RXD_QW1_STATUS_MASK) >> - I40E_RXD_QW1_STATUS_SHIFT; + ptype = FIELD_GET(I40E_RXD_QW1_PTYPE_MASK, qword); + rx_error = FIELD_GET(I40E_RXD_QW1_ERROR_MASK, qword); + rx_status = FIELD_GET(I40E_RXD_QW1_STATUS_MASK, 
qword); decoded = decode_rx_desc_ptype(ptype); skb->ip_summed = CHECKSUM_NONE; @@ -1896,13 +1892,10 @@ void i40e_process_skb_fields(struct i40e_ring *rx_ring, union i40e_rx_desc *rx_desc, struct sk_buff *skb) { u64 qword = le64_to_cpu(rx_desc->wb.qword1.status_error_len); - u32 rx_status = (qword & I40E_RXD_QW1_STATUS_MASK) >> - I40E_RXD_QW1_STATUS_SHIFT; + u32 rx_status = FIELD_GET(I40E_RXD_QW1_STATUS_MASK, qword); u32 tsynvalid = rx_status & I40E_RXD_QW1_STATUS_TSYNVALID_MASK; - u32 tsyn = (rx_status & I40E_RXD_QW1_STATUS_TSYNINDX_MASK) >> - I40E_RXD_QW1_STATUS_TSYNINDX_SHIFT; - u8 rx_ptype = (qword & I40E_RXD_QW1_PTYPE_MASK) >> - I40E_RXD_QW1_PTYPE_SHIFT; + u32 tsyn = FIELD_GET(I40E_RXD_QW1_STATUS_TSYNINDX_MASK, rx_status); + u8 rx_ptype = FIELD_GET(I40E_RXD_QW1_PTYPE_MASK, qword); if (unlikely(tsynvalid)) i40e_ptp_rx_hwtstamp(rx_ring->vsi->back, skb, tsyn); @@ -2549,8 +2542,7 @@ static int i40e_clean_rx_irq(struct i40e_ring *rx_ring, int budget, continue; } - size = (qword & I40E_RXD_QW1_LENGTH_PBUF_MASK) >> - I40E_RXD_QW1_LENGTH_PBUF_SHIFT; + size = FIELD_GET(I40E_RXD_QW1_LENGTH_PBUF_MASK, qword); if (!size) break; @@ -3594,8 +3586,7 @@ static inline int i40e_tx_map(struct i40e_ring *tx_ring, struct sk_buff *skb, if (tx_flags & I40E_TX_FLAGS_HW_VLAN) { td_cmd |= I40E_TX_DESC_CMD_IL2TAG1; - td_tag = (tx_flags & I40E_TX_FLAGS_VLAN_MASK) >> - I40E_TX_FLAGS_VLAN_SHIFT; + td_tag = FIELD_GET(I40E_TX_FLAGS_VLAN_MASK, tx_flags); } first->tx_flags = tx_flags; diff --git a/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c b/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c index 5a45c53e6770..0de8e00ad291 100644 --- a/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c +++ b/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c @@ -474,10 +474,10 @@ static void i40e_release_rdma_qvlist(struct i40e_vf *vf) */ reg_idx = (msix_vf - 1) * vf->vf_id + qv_info->ceq_idx; reg = rd32(hw, I40E_VPINT_CEQCTL(reg_idx)); - next_q_index = (reg & I40E_VPINT_CEQCTL_NEXTQ_INDX_MASK) - >> I40E_VPINT_CEQCTL_NEXTQ_INDX_SHIFT; - next_q_type = (reg & I40E_VPINT_CEQCTL_NEXTQ_TYPE_MASK) - >> I40E_VPINT_CEQCTL_NEXTQ_TYPE_SHIFT; + next_q_index = FIELD_GET(I40E_VPINT_CEQCTL_NEXTQ_INDX_MASK, + reg); + next_q_type = FIELD_GET(I40E_VPINT_CEQCTL_NEXTQ_TYPE_MASK, + reg); reg_idx = ((msix_vf - 1) * vf->vf_id) + (v_idx - 1); reg = (next_q_index & @@ -555,10 +555,10 @@ i40e_config_rdma_qvlist(struct i40e_vf *vf, * queue on top. Also link it with the new queue in CEQCTL. 
*/ reg = rd32(hw, I40E_VPINT_LNKLSTN(reg_idx)); - next_q_idx = ((reg & I40E_VPINT_LNKLSTN_FIRSTQ_INDX_MASK) >> - I40E_VPINT_LNKLSTN_FIRSTQ_INDX_SHIFT); - next_q_type = ((reg & I40E_VPINT_LNKLSTN_FIRSTQ_TYPE_MASK) >> - I40E_VPINT_LNKLSTN_FIRSTQ_TYPE_SHIFT); + next_q_idx = FIELD_GET(I40E_VPINT_LNKLSTN_FIRSTQ_INDX_MASK, + reg); + next_q_type = FIELD_GET(I40E_VPINT_LNKLSTN_FIRSTQ_TYPE_MASK, + reg); if (qv_info->ceq_idx != I40E_QUEUE_INVALID_IDX) { reg_idx = (msix_vf - 1) * vf->vf_id + qv_info->ceq_idx; @@ -4673,9 +4673,8 @@ int i40e_ndo_get_vf_config(struct net_device *netdev, ivi->max_tx_rate = vf->tx_rate; ivi->min_tx_rate = 0; - ivi->vlan = le16_to_cpu(vsi->info.pvid) & I40E_VLAN_MASK; - ivi->qos = (le16_to_cpu(vsi->info.pvid) & I40E_PRIORITY_MASK) >> - I40E_VLAN_PRIORITY_SHIFT; + ivi->vlan = le16_get_bits(vsi->info.pvid, I40E_VLAN_MASK); + ivi->qos = le16_get_bits(vsi->info.pvid, I40E_PRIORITY_MASK); if (vf->link_forced == false) ivi->linkstate = IFLA_VF_LINK_STATE_AUTO; else if (vf->link_up == true) diff --git a/drivers/net/ethernet/intel/i40e/i40e_xsk.c b/drivers/net/ethernet/intel/i40e/i40e_xsk.c index e99fa854d17f..af7d5fa6cdc1 100644 --- a/drivers/net/ethernet/intel/i40e/i40e_xsk.c +++ b/drivers/net/ethernet/intel/i40e/i40e_xsk.c @@ -476,8 +476,7 @@ int i40e_clean_rx_irq_zc(struct i40e_ring *rx_ring, int budget) continue; } - size = (qword & I40E_RXD_QW1_LENGTH_PBUF_MASK) >> - I40E_RXD_QW1_LENGTH_PBUF_SHIFT; + size = FIELD_GET(I40E_RXD_QW1_LENGTH_PBUF_MASK, qword); if (!size) break; From patchwork Wed Dec 6 01:01:11 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jesse Brandeburg X-Patchwork-Id: 13480921 X-Patchwork-Delegate: kuba@kernel.org Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b="KtuAkVyS" Received: from mgamail.intel.com (mgamail.intel.com [198.175.65.9]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 701B61BC for ; Tue, 5 Dec 2023 17:01:40 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1701824501; x=1733360501; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=r8lWu9gNBkUsZSd6it0SarezI/ivZlZxd38l3b9PCII=; b=KtuAkVySH1Xv0nq1BRYoW5jYxF9KosxQrf/ihY1V/ciYScOx5KMEFl6n r9HFaMaiUXEdPAbyI4j1t+Kqxi6OVNom2vrDieqcB2CjJpIZz8QTFqFNH SZ5pVEy0DqcTEv1Rvk7Edc3L9lym1mJvwRHAnfWA9+aUXTImQVnoxkmML YsgYZOHkZKZ+ae+C0vb3gyPynTJLSXtnIhaF7D6q1dl8ur+OQ+BKI6g1C d3f6v+OOpeMx7I26G2xklVwps9rumEYld6RwEKrh9C5ba242i1TzZcDq0 +SZqYAO8sSmAaWxGjvw8e5gtTVe6O0o3QW8qZBVTusIbsmrxWFtKH/5RT Q==; X-IronPort-AV: E=McAfee;i="6600,9927,10915"; a="12700323" X-IronPort-AV: E=Sophos;i="6.04,254,1695711600"; d="scan'208";a="12700323" Received: from fmsmga004.fm.intel.com ([10.253.24.48]) by orvoesa101.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 05 Dec 2023 17:01:36 -0800 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10915"; a="841655283" X-IronPort-AV: E=Sophos;i="6.04,254,1695711600"; d="scan'208";a="841655283" Received: from jbrandeb-spr1.jf.intel.com ([10.166.28.233]) by fmsmga004-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 05 Dec 2023 17:01:35 -0800 From: Jesse Brandeburg To: intel-wired-lan@lists.osuosl.org Cc: Jesse Brandeburg , netdev@vger.kernel.org, aleksander.lobakin@intel.com, przemyslaw.kitszel@intel.com, horms@kernel.org, marcin.szycik@linux.intel.com, Julia Lawall Subject: [PATCH iwl-next v2 12/15] iavf: 
field get conversion Date: Tue, 5 Dec 2023 17:01:11 -0800 Message-Id: <20231206010114.2259388-13-jesse.brandeburg@intel.com> X-Mailer: git-send-email 2.39.3 In-Reply-To: <20231206010114.2259388-1-jesse.brandeburg@intel.com> References: <20231206010114.2259388-1-jesse.brandeburg@intel.com> Precedence: bulk X-Mailing-List: netdev@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Patchwork-Delegate: kuba@kernel.org Refactor the iavf driver to use FIELD_GET() for mask and shift reads, which reduces lines of code and adds clarity of intent. This code was generated by the following coccinelle/spatch script and then manually repaired in a later patch. @get@ constant shift,mask; type T; expression a; @@ -((T)((a) & mask) >> shift) +FIELD_GET(mask, a) and applied via: spatch --sp-file field_prep.cocci --in-place --dir \ drivers/net/ethernet/intel/ Cc: Julia Lawall Reviewed-by: Marcin Szycik Reviewed-by: Simon Horman Signed-off-by: Jesse Brandeburg Tested-by: Rafal Romanowski --- .../net/ethernet/intel/iavf/iavf_ethtool.c | 3 +-- drivers/net/ethernet/intel/iavf/iavf_txrx.c | 20 +++++++------------ 2 files changed, 8 insertions(+), 15 deletions(-) diff --git a/drivers/net/ethernet/intel/iavf/iavf_ethtool.c b/drivers/net/ethernet/intel/iavf/iavf_ethtool.c index 11150bdc63d0..90d8f1fcc3aa 100644 --- a/drivers/net/ethernet/intel/iavf/iavf_ethtool.c +++ b/drivers/net/ethernet/intel/iavf/iavf_ethtool.c @@ -1020,8 +1020,7 @@ iavf_parse_rx_flow_user_data(struct ethtool_rx_flow_spec *fsp, #define IAVF_USERDEF_FLEX_MAX_OFFS_VAL 504 flex = &fltr->flex_words[cnt++]; flex->word = value & IAVF_USERDEF_FLEX_WORD_M; - flex->offset = (value & IAVF_USERDEF_FLEX_OFFS_M) >> - IAVF_USERDEF_FLEX_OFFS_S; + flex->offset = FIELD_GET(IAVF_USERDEF_FLEX_OFFS_M, value); if (flex->offset > IAVF_USERDEF_FLEX_MAX_OFFS_VAL) return -EINVAL; } diff --git a/drivers/net/ethernet/intel/iavf/iavf_txrx.c b/drivers/net/ethernet/intel/iavf/iavf_txrx.c index fb7edba9c2f8..b71484c87a84 100644 --- a/drivers/net/ethernet/intel/iavf/iavf_txrx.c +++ b/drivers/net/ethernet/intel/iavf/iavf_txrx.c @@ -989,11 +989,9 @@ static void iavf_rx_checksum(struct iavf_vsi *vsi, u64 qword; qword = le64_to_cpu(rx_desc->wb.qword1.status_error_len); - ptype = (qword & IAVF_RXD_QW1_PTYPE_MASK) >> IAVF_RXD_QW1_PTYPE_SHIFT; - rx_error = (qword & IAVF_RXD_QW1_ERROR_MASK) >> - IAVF_RXD_QW1_ERROR_SHIFT; - rx_status = (qword & IAVF_RXD_QW1_STATUS_MASK) >> - IAVF_RXD_QW1_STATUS_SHIFT; + ptype = FIELD_GET(IAVF_RXD_QW1_PTYPE_MASK, qword); + rx_error = FIELD_GET(IAVF_RXD_QW1_ERROR_MASK, qword); + rx_status = FIELD_GET(IAVF_RXD_QW1_STATUS_MASK, qword); decoded = decode_rx_desc_ptype(ptype); skb->ip_summed = CHECKSUM_NONE; @@ -1534,8 +1532,7 @@ static int iavf_clean_rx_irq(struct iavf_ring *rx_ring, int budget) if (!iavf_test_staterr(rx_desc, IAVF_RXD_DD)) break; - size = (qword & IAVF_RXD_QW1_LENGTH_PBUF_MASK) >> - IAVF_RXD_QW1_LENGTH_PBUF_SHIFT; + size = FIELD_GET(IAVF_RXD_QW1_LENGTH_PBUF_MASK, qword); iavf_trace(clean_rx_irq, rx_ring, rx_desc, skb); rx_buffer = iavf_get_rx_buffer(rx_ring, size); @@ -1582,8 +1579,7 @@ static int iavf_clean_rx_irq(struct iavf_ring *rx_ring, int budget) total_rx_bytes += skb->len; qword = le64_to_cpu(rx_desc->wb.qword1.status_error_len); - rx_ptype = (qword & IAVF_RXD_QW1_PTYPE_MASK) >> - IAVF_RXD_QW1_PTYPE_SHIFT; + rx_ptype = FIELD_GET(IAVF_RXD_QW1_PTYPE_MASK, qword); /* populate checksum, VLAN, and protocol */ iavf_process_skb_fields(rx_ring, rx_desc, skb, rx_ptype); @@ -2291,8 +2287,7 @@ static void 
iavf_tx_map(struct iavf_ring *tx_ring, struct sk_buff *skb, if (tx_flags & IAVF_TX_FLAGS_HW_VLAN) { td_cmd |= IAVF_TX_DESC_CMD_IL2TAG1; - td_tag = (tx_flags & IAVF_TX_FLAGS_VLAN_MASK) >> - IAVF_TX_FLAGS_VLAN_SHIFT; + td_tag = FIELD_GET(IAVF_TX_FLAGS_VLAN_MASK, tx_flags); } first->tx_flags = tx_flags; @@ -2468,8 +2463,7 @@ static netdev_tx_t iavf_xmit_frame_ring(struct sk_buff *skb, if (tx_flags & IAVF_TX_FLAGS_HW_OUTER_SINGLE_VLAN) { cd_type_cmd_tso_mss |= IAVF_TX_CTX_DESC_IL2TAG2 << IAVF_TXD_CTX_QW1_CMD_SHIFT; - cd_l2tag2 = (tx_flags & IAVF_TX_FLAGS_VLAN_MASK) >> - IAVF_TX_FLAGS_VLAN_SHIFT; + cd_l2tag2 = FIELD_GET(IAVF_TX_FLAGS_VLAN_MASK, tx_flags); } /* obtain protocol of skb */ From patchwork Wed Dec 6 01:01:12 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jesse Brandeburg X-Patchwork-Id: 13480922 X-Patchwork-Delegate: kuba@kernel.org Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b="LrKn8xDT" Received: from mgamail.intel.com (mgamail.intel.com [198.175.65.9]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 503381BE for ; Tue, 5 Dec 2023 17:01:41 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1701824502; x=1733360502; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=8kS9pXYmMEvQ8opBaJHQNh6kC26yvGHuMX5cDcYwI4M=; b=LrKn8xDT6Yn6DrEbpteGRiTjcVYv2105k3W6ilxJ1704zKnOKXpK2LCG L9zS+3GFVkwcoD0PMpK8fwDy2yCzPpGzqda3Ltt3rv9CwQDUn+RQFiM2h otPpiGFv9BHUkc8teDJsQOf+9RdfU8EdH4VEsQ12ddWKDP6MYJynpoVQo ubeXbrzETZHFD+bIcotUyixj9Mjsu6H8DGQDIZj44efr5kRqOkynRnLYM DxFUg9+FgML9nyP8LDyucrseYJo/jHiH3VwUKFziQ+Kt0wwT5cksW0FYl Joch7WzkRA6ElIEZWeFoOZSzMKdTtV3O0OUqt2+aZIeJKgAIwyq0Pf7kk w==; X-IronPort-AV: E=McAfee;i="6600,9927,10915"; a="12700328" X-IronPort-AV: E=Sophos;i="6.04,254,1695711600"; d="scan'208";a="12700328" Received: from fmsmga004.fm.intel.com ([10.253.24.48]) by orvoesa101.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 05 Dec 2023 17:01:36 -0800 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10915"; a="841655289" X-IronPort-AV: E=Sophos;i="6.04,254,1695711600"; d="scan'208";a="841655289" Received: from jbrandeb-spr1.jf.intel.com ([10.166.28.233]) by fmsmga004-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 05 Dec 2023 17:01:35 -0800 From: Jesse Brandeburg To: intel-wired-lan@lists.osuosl.org Cc: Jesse Brandeburg , netdev@vger.kernel.org, aleksander.lobakin@intel.com, przemyslaw.kitszel@intel.com, horms@kernel.org, marcin.szycik@linux.intel.com, Julia Lawall Subject: [PATCH iwl-next v2 13/15] ice: field get conversion Date: Tue, 5 Dec 2023 17:01:12 -0800 Message-Id: <20231206010114.2259388-14-jesse.brandeburg@intel.com> X-Mailer: git-send-email 2.39.3 In-Reply-To: <20231206010114.2259388-1-jesse.brandeburg@intel.com> References: <20231206010114.2259388-1-jesse.brandeburg@intel.com> Precedence: bulk X-Mailing-List: netdev@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Patchwork-Delegate: kuba@kernel.org Refactor the ice driver to use FIELD_GET() for mask and shift reads, which reduces lines of code and adds clarity of intent. This code was generated by the following coccinelle/spatch script and then manually repaired. 
@get@ constant shift,mask; type T; expression a; @@ -(((T)(a) & mask) >> shift) +FIELD_GET(mask, a) and applied via: spatch --sp-file field_prep.cocci --in-place --dir \ drivers/net/ethernet/intel/ CC: Alexander Lobakin Cc: Julia Lawall Reviewed-by: Marcin Szycik Reviewed-by: Simon Horman Signed-off-by: Jesse Brandeburg Tested-by: Pucha Himasekhar Reddy (A Contingent worker at Intel) --- v2: added a couple more get conversions --- drivers/net/ethernet/intel/ice/ice_base.c | 12 +-- drivers/net/ethernet/intel/ice/ice_common.c | 25 +++---- drivers/net/ethernet/intel/ice/ice_dcb.c | 74 ++++++++----------- drivers/net/ethernet/intel/ice/ice_dcb_nl.c | 2 +- .../net/ethernet/intel/ice/ice_ethtool_fdir.c | 3 +- drivers/net/ethernet/intel/ice/ice_lib.c | 5 +- drivers/net/ethernet/intel/ice/ice_main.c | 48 +++++------- drivers/net/ethernet/intel/ice/ice_nvm.c | 15 ++-- drivers/net/ethernet/intel/ice/ice_ptp.c | 4 +- drivers/net/ethernet/intel/ice/ice_sched.c | 3 +- drivers/net/ethernet/intel/ice/ice_sriov.c | 3 +- drivers/net/ethernet/intel/ice/ice_virtchnl.c | 2 +- .../ethernet/intel/ice/ice_virtchnl_fdir.c | 13 ++-- 13 files changed, 85 insertions(+), 124 deletions(-) diff --git a/drivers/net/ethernet/intel/ice/ice_base.c b/drivers/net/ethernet/intel/ice/ice_base.c index 3fd6e99dba23..26648b193ac9 100644 --- a/drivers/net/ethernet/intel/ice/ice_base.c +++ b/drivers/net/ethernet/intel/ice/ice_base.c @@ -224,14 +224,10 @@ static void ice_cfg_itr_gran(struct ice_hw *hw) /* no need to update global register if ITR gran is already set */ if (!(regval & GLINT_CTL_DIS_AUTOMASK_M) && - (((regval & GLINT_CTL_ITR_GRAN_200_M) >> - GLINT_CTL_ITR_GRAN_200_S) == ICE_ITR_GRAN_US) && - (((regval & GLINT_CTL_ITR_GRAN_100_M) >> - GLINT_CTL_ITR_GRAN_100_S) == ICE_ITR_GRAN_US) && - (((regval & GLINT_CTL_ITR_GRAN_50_M) >> - GLINT_CTL_ITR_GRAN_50_S) == ICE_ITR_GRAN_US) && - (((regval & GLINT_CTL_ITR_GRAN_25_M) >> - GLINT_CTL_ITR_GRAN_25_S) == ICE_ITR_GRAN_US)) + (FIELD_GET(GLINT_CTL_ITR_GRAN_200_M, regval) == ICE_ITR_GRAN_US) && + (FIELD_GET(GLINT_CTL_ITR_GRAN_100_M, regval) == ICE_ITR_GRAN_US) && + (FIELD_GET(GLINT_CTL_ITR_GRAN_50_M, regval) == ICE_ITR_GRAN_US) && + (FIELD_GET(GLINT_CTL_ITR_GRAN_25_M, regval) == ICE_ITR_GRAN_US)) return; regval = FIELD_PREP(GLINT_CTL_ITR_GRAN_200_M, ICE_ITR_GRAN_US) | diff --git a/drivers/net/ethernet/intel/ice/ice_common.c b/drivers/net/ethernet/intel/ice/ice_common.c index eb5c00b83112..bb3fc82bd51f 100644 --- a/drivers/net/ethernet/intel/ice/ice_common.c +++ b/drivers/net/ethernet/intel/ice/ice_common.c @@ -960,8 +960,8 @@ static int ice_get_fw_log_cfg(struct ice_hw *hw) u16 v, m, flgs; v = le16_to_cpu(config[i]); - m = (v & ICE_AQC_FW_LOG_ID_M) >> ICE_AQC_FW_LOG_ID_S; - flgs = (v & ICE_AQC_FW_LOG_EN_M) >> ICE_AQC_FW_LOG_EN_S; + m = FIELD_GET(ICE_AQC_FW_LOG_ID_M, v); + flgs = FIELD_GET(ICE_AQC_FW_LOG_EN_M, v); if (m < ICE_AQC_FW_LOG_ID_MAX) hw->fw_log.evnts[m].cur = flgs; @@ -1116,7 +1116,7 @@ static int ice_cfg_fw_log(struct ice_hw *hw, bool enable) } v = le16_to_cpu(data[i]); - m = (v & ICE_AQC_FW_LOG_ID_M) >> ICE_AQC_FW_LOG_ID_S; + m = FIELD_GET(ICE_AQC_FW_LOG_ID_M, v); hw->fw_log.evnts[m].cur = hw->fw_log.evnts[m].cfg; } } @@ -1152,9 +1152,8 @@ void ice_output_fw_log(struct ice_hw *hw, struct ice_aq_desc *desc, void *buf) */ static void ice_get_itr_intrl_gran(struct ice_hw *hw) { - u8 max_agg_bw = (rd32(hw, GL_PWR_MODE_CTL) & - GL_PWR_MODE_CTL_CAR_MAX_BW_M) >> - GL_PWR_MODE_CTL_CAR_MAX_BW_S; + u8 max_agg_bw = FIELD_GET(GL_PWR_MODE_CTL_CAR_MAX_BW_M, + rd32(hw, GL_PWR_MODE_CTL)); 
switch (max_agg_bw) { case ICE_MAX_AGG_BW_200G: @@ -1186,9 +1185,7 @@ int ice_init_hw(struct ice_hw *hw) if (status) return status; - hw->pf_id = (u8)(rd32(hw, PF_FUNC_RID) & - PF_FUNC_RID_FUNC_NUM_M) >> - PF_FUNC_RID_FUNC_NUM_S; + hw->pf_id = FIELD_GET(PF_FUNC_RID_FUNC_NUM_M, rd32(hw, PF_FUNC_RID)); status = ice_reset(hw, ICE_RESET_PFR); if (status) @@ -1374,8 +1371,8 @@ int ice_check_reset(struct ice_hw *hw) * or EMPR has occurred. The grst delay value is in 100ms units. * Add 1sec for outstanding AQ commands that can take a long time. */ - grst_timeout = ((rd32(hw, GLGEN_RSTCTL) & GLGEN_RSTCTL_GRSTDEL_M) >> - GLGEN_RSTCTL_GRSTDEL_S) + 10; + grst_timeout = FIELD_GET(GLGEN_RSTCTL_GRSTDEL_M, + rd32(hw, GLGEN_RSTCTL)) + 10; for (cnt = 0; cnt < grst_timeout; cnt++) { mdelay(100); @@ -2459,7 +2456,7 @@ ice_parse_1588_func_caps(struct ice_hw *hw, struct ice_hw_func_caps *func_p, info->tmr_index_owned = ((number & ICE_TS_TMR_IDX_OWND_M) != 0); info->tmr_index_assoc = ((number & ICE_TS_TMR_IDX_ASSOC_M) != 0); - info->clk_freq = (number & ICE_TS_CLK_FREQ_M) >> ICE_TS_CLK_FREQ_S; + info->clk_freq = FIELD_GET(ICE_TS_CLK_FREQ_M, number); info->clk_src = ((number & ICE_TS_CLK_SRC_M) != 0); if (info->clk_freq < NUM_ICE_TIME_REF_FREQ) { @@ -2660,7 +2657,7 @@ ice_parse_1588_dev_caps(struct ice_hw *hw, struct ice_hw_dev_caps *dev_p, info->tmr0_owned = ((number & ICE_TS_TMR0_OWND_M) != 0); info->tmr0_ena = ((number & ICE_TS_TMR0_ENA_M) != 0); - info->tmr1_owner = (number & ICE_TS_TMR1_OWNR_M) >> ICE_TS_TMR1_OWNR_S; + info->tmr1_owner = FIELD_GET(ICE_TS_TMR1_OWNR_M, number); info->tmr1_owned = ((number & ICE_TS_TMR1_OWND_M) != 0); info->tmr1_ena = ((number & ICE_TS_TMR1_ENA_M) != 0); @@ -5984,7 +5981,7 @@ ice_get_link_default_override(struct ice_link_default_override_tlv *ldo, ice_debug(hw, ICE_DBG_INIT, "Failed to read override link options.\n"); return status; } - ldo->options = buf & ICE_LINK_OVERRIDE_OPT_M; + ldo->options = FIELD_GET(ICE_LINK_OVERRIDE_OPT_M, buf); ldo->phy_config = (buf & ICE_LINK_OVERRIDE_PHY_CFG_M) >> ICE_LINK_OVERRIDE_PHY_CFG_S; diff --git a/drivers/net/ethernet/intel/ice/ice_dcb.c b/drivers/net/ethernet/intel/ice/ice_dcb.c index 41b7853291d3..7f3e00c187b4 100644 --- a/drivers/net/ethernet/intel/ice/ice_dcb.c +++ b/drivers/net/ethernet/intel/ice/ice_dcb.c @@ -146,8 +146,7 @@ static u8 ice_get_dcbx_status(struct ice_hw *hw) u32 reg; reg = rd32(hw, PRTDCB_GENS); - return (u8)((reg & PRTDCB_GENS_DCBX_STATUS_M) >> - PRTDCB_GENS_DCBX_STATUS_S); + return FIELD_GET(PRTDCB_GENS_DCBX_STATUS_M, reg); } /** @@ -173,11 +172,9 @@ ice_parse_ieee_ets_common_tlv(u8 *buf, struct ice_dcb_ets_cfg *ets_cfg) */ for (i = 0; i < 4; i++) { ets_cfg->prio_table[i * 2] = - ((buf[offset] & ICE_IEEE_ETS_PRIO_1_M) >> - ICE_IEEE_ETS_PRIO_1_S); + FIELD_GET(ICE_IEEE_ETS_PRIO_1_M, buf[offset]); ets_cfg->prio_table[i * 2 + 1] = - ((buf[offset] & ICE_IEEE_ETS_PRIO_0_M) >> - ICE_IEEE_ETS_PRIO_0_S); + FIELD_GET(ICE_IEEE_ETS_PRIO_0_M, buf[offset]); offset++; } @@ -221,11 +218,9 @@ ice_parse_ieee_etscfg_tlv(struct ice_lldp_org_tlv *tlv, * |1bit | 1bit|3 bits|3bits| */ etscfg = &dcbcfg->etscfg; - etscfg->willing = ((buf[0] & ICE_IEEE_ETS_WILLING_M) >> - ICE_IEEE_ETS_WILLING_S); - etscfg->cbs = ((buf[0] & ICE_IEEE_ETS_CBS_M) >> ICE_IEEE_ETS_CBS_S); - etscfg->maxtcs = ((buf[0] & ICE_IEEE_ETS_MAXTC_M) >> - ICE_IEEE_ETS_MAXTC_S); + etscfg->willing = FIELD_GET(ICE_IEEE_ETS_WILLING_M, buf[0]); + etscfg->cbs = FIELD_GET(ICE_IEEE_ETS_CBS_M, buf[0]); + etscfg->maxtcs = FIELD_GET(ICE_IEEE_ETS_MAXTC_M, buf[0]); /* Begin parsing at 
Priority Assignment Table (offset 1 in buf) */ ice_parse_ieee_ets_common_tlv(&buf[1], etscfg); @@ -267,11 +262,9 @@ ice_parse_ieee_pfccfg_tlv(struct ice_lldp_org_tlv *tlv, * ----------------------------------------- * |1bit | 1bit|2 bits|4bits| 1 octet | */ - dcbcfg->pfc.willing = ((buf[0] & ICE_IEEE_PFC_WILLING_M) >> - ICE_IEEE_PFC_WILLING_S); - dcbcfg->pfc.mbc = ((buf[0] & ICE_IEEE_PFC_MBC_M) >> ICE_IEEE_PFC_MBC_S); - dcbcfg->pfc.pfccap = ((buf[0] & ICE_IEEE_PFC_CAP_M) >> - ICE_IEEE_PFC_CAP_S); + dcbcfg->pfc.willing = FIELD_GET(ICE_IEEE_PFC_WILLING_M, buf[0]); + dcbcfg->pfc.mbc = FIELD_GET(ICE_IEEE_PFC_MBC_M, buf[0]); + dcbcfg->pfc.pfccap = FIELD_GET(ICE_IEEE_PFC_CAP_M, buf[0]); dcbcfg->pfc.pfcena = buf[1]; } @@ -293,7 +286,7 @@ ice_parse_ieee_app_tlv(struct ice_lldp_org_tlv *tlv, u8 *buf; typelen = ntohs(tlv->typelen); - len = ((typelen & ICE_LLDP_TLV_LEN_M) >> ICE_LLDP_TLV_LEN_S); + len = FIELD_GET(ICE_LLDP_TLV_LEN_M, typelen); buf = tlv->tlvinfo; /* Removing sizeof(ouisubtype) and reserved byte from len. @@ -313,12 +306,10 @@ ice_parse_ieee_app_tlv(struct ice_lldp_org_tlv *tlv, * ----------------------------------------- */ while (offset < len) { - dcbcfg->app[i].priority = ((buf[offset] & - ICE_IEEE_APP_PRIO_M) >> - ICE_IEEE_APP_PRIO_S); - dcbcfg->app[i].selector = ((buf[offset] & - ICE_IEEE_APP_SEL_M) >> - ICE_IEEE_APP_SEL_S); + dcbcfg->app[i].priority = FIELD_GET(ICE_IEEE_APP_PRIO_M, + buf[offset]); + dcbcfg->app[i].selector = FIELD_GET(ICE_IEEE_APP_SEL_M, + buf[offset]); dcbcfg->app[i].prot_id = (buf[offset + 1] << 0x8) | buf[offset + 2]; /* Move to next app */ @@ -346,8 +337,7 @@ ice_parse_ieee_tlv(struct ice_lldp_org_tlv *tlv, struct ice_dcbx_cfg *dcbcfg) u8 subtype; ouisubtype = ntohl(tlv->ouisubtype); - subtype = (u8)((ouisubtype & ICE_LLDP_TLV_SUBTYPE_M) >> - ICE_LLDP_TLV_SUBTYPE_S); + subtype = FIELD_GET(ICE_LLDP_TLV_SUBTYPE_M, ouisubtype); switch (subtype) { case ICE_IEEE_SUBTYPE_ETS_CFG: ice_parse_ieee_etscfg_tlv(tlv, dcbcfg); @@ -398,11 +388,9 @@ ice_parse_cee_pgcfg_tlv(struct ice_cee_feat_tlv *tlv, */ for (i = 0; i < 4; i++) { etscfg->prio_table[i * 2] = - ((buf[offset] & ICE_CEE_PGID_PRIO_1_M) >> - ICE_CEE_PGID_PRIO_1_S); + FIELD_GET(ICE_CEE_PGID_PRIO_1_M, buf[offset]); etscfg->prio_table[i * 2 + 1] = - ((buf[offset] & ICE_CEE_PGID_PRIO_0_M) >> - ICE_CEE_PGID_PRIO_0_S); + FIELD_GET(ICE_CEE_PGID_PRIO_0_M, buf[offset]); offset++; } @@ -465,7 +453,7 @@ ice_parse_cee_app_tlv(struct ice_cee_feat_tlv *tlv, struct ice_dcbx_cfg *dcbcfg) u8 i; typelen = ntohs(tlv->hdr.typelen); - len = ((typelen & ICE_LLDP_TLV_LEN_M) >> ICE_LLDP_TLV_LEN_S); + len = FIELD_GET(ICE_LLDP_TLV_LEN_M, typelen); dcbcfg->numapps = len / sizeof(*app); if (!dcbcfg->numapps) @@ -520,14 +508,13 @@ ice_parse_cee_tlv(struct ice_lldp_org_tlv *tlv, struct ice_dcbx_cfg *dcbcfg) u32 ouisubtype; ouisubtype = ntohl(tlv->ouisubtype); - subtype = (u8)((ouisubtype & ICE_LLDP_TLV_SUBTYPE_M) >> - ICE_LLDP_TLV_SUBTYPE_S); + subtype = FIELD_GET(ICE_LLDP_TLV_SUBTYPE_M, ouisubtype); /* Return if not CEE DCBX */ if (subtype != ICE_CEE_DCBX_TYPE) return; typelen = ntohs(tlv->typelen); - tlvlen = ((typelen & ICE_LLDP_TLV_LEN_M) >> ICE_LLDP_TLV_LEN_S); + tlvlen = FIELD_GET(ICE_LLDP_TLV_LEN_M, typelen); len = sizeof(tlv->typelen) + sizeof(ouisubtype) + sizeof(struct ice_cee_ctrl_tlv); /* Return if no CEE DCBX Feature TLVs */ @@ -539,9 +526,8 @@ ice_parse_cee_tlv(struct ice_lldp_org_tlv *tlv, struct ice_dcbx_cfg *dcbcfg) u16 sublen; typelen = ntohs(sub_tlv->hdr.typelen); - sublen = ((typelen & ICE_LLDP_TLV_LEN_M) >> 
ICE_LLDP_TLV_LEN_S); - subtype = (u8)((typelen & ICE_LLDP_TLV_TYPE_M) >> - ICE_LLDP_TLV_TYPE_S); + sublen = FIELD_GET(ICE_LLDP_TLV_LEN_M, typelen); + subtype = FIELD_GET(ICE_LLDP_TLV_TYPE_M, typelen); switch (subtype) { case ICE_CEE_SUBTYPE_PG_CFG: ice_parse_cee_pgcfg_tlv(sub_tlv, dcbcfg); @@ -578,7 +564,7 @@ ice_parse_org_tlv(struct ice_lldp_org_tlv *tlv, struct ice_dcbx_cfg *dcbcfg) u32 oui; ouisubtype = ntohl(tlv->ouisubtype); - oui = ((ouisubtype & ICE_LLDP_TLV_OUI_M) >> ICE_LLDP_TLV_OUI_S); + oui = FIELD_GET(ICE_LLDP_TLV_OUI_M, ouisubtype); switch (oui) { case ICE_IEEE_8021QAZ_OUI: ice_parse_ieee_tlv(tlv, dcbcfg); @@ -615,8 +601,8 @@ static int ice_lldp_to_dcb_cfg(u8 *lldpmib, struct ice_dcbx_cfg *dcbcfg) tlv = (struct ice_lldp_org_tlv *)lldpmib; while (1) { typelen = ntohs(tlv->typelen); - type = ((typelen & ICE_LLDP_TLV_TYPE_M) >> ICE_LLDP_TLV_TYPE_S); - len = ((typelen & ICE_LLDP_TLV_LEN_M) >> ICE_LLDP_TLV_LEN_S); + type = FIELD_GET(ICE_LLDP_TLV_TYPE_M, typelen); + len = FIELD_GET(ICE_LLDP_TLV_LEN_M, typelen); offset += sizeof(typelen) + len; /* END TLV or beyond LLDPDU size */ @@ -805,11 +791,11 @@ ice_cee_to_dcb_cfg(struct ice_aqc_get_cee_dcb_cfg_resp *cee_cfg, */ for (i = 0; i < ICE_MAX_TRAFFIC_CLASS / 2; i++) { dcbcfg->etscfg.prio_table[i * 2] = - ((cee_cfg->oper_prio_tc[i] & ICE_CEE_PGID_PRIO_0_M) >> - ICE_CEE_PGID_PRIO_0_S); + FIELD_GET(ICE_CEE_PGID_PRIO_0_M, + cee_cfg->oper_prio_tc[i]); dcbcfg->etscfg.prio_table[i * 2 + 1] = - ((cee_cfg->oper_prio_tc[i] & ICE_CEE_PGID_PRIO_1_M) >> - ICE_CEE_PGID_PRIO_1_S); + FIELD_GET(ICE_CEE_PGID_PRIO_1_M, + cee_cfg->oper_prio_tc[i]); } ice_for_each_traffic_class(i) { @@ -1482,7 +1468,7 @@ ice_dcb_cfg_to_lldp(u8 *lldpmib, u16 *miblen, struct ice_dcbx_cfg *dcbcfg) while (1) { ice_add_dcb_tlv(tlv, dcbcfg, tlvid++); typelen = ntohs(tlv->typelen); - len = (typelen & ICE_LLDP_TLV_LEN_M) >> ICE_LLDP_TLV_LEN_S; + len = FIELD_GET(ICE_LLDP_TLV_LEN_M, typelen); if (len) offset += len + 2; /* END TLV or beyond LLDPDU size */ diff --git a/drivers/net/ethernet/intel/ice/ice_dcb_nl.c b/drivers/net/ethernet/intel/ice/ice_dcb_nl.c index e1fbc6de452d..6d50b90a7359 100644 --- a/drivers/net/ethernet/intel/ice/ice_dcb_nl.c +++ b/drivers/net/ethernet/intel/ice/ice_dcb_nl.c @@ -227,7 +227,7 @@ static void ice_get_pfc_delay(struct ice_hw *hw, u16 *delay) u32 val; val = rd32(hw, PRTDCB_GENC); - *delay = (u16)((val & PRTDCB_GENC_PFCLDA_M) >> PRTDCB_GENC_PFCLDA_S); + *delay = FIELD_GET(PRTDCB_GENC_PFCLDA_M, val); } /** diff --git a/drivers/net/ethernet/intel/ice/ice_ethtool_fdir.c b/drivers/net/ethernet/intel/ice/ice_ethtool_fdir.c index d151e5bacfec..7886fa1a0e1d 100644 --- a/drivers/net/ethernet/intel/ice/ice_ethtool_fdir.c +++ b/drivers/net/ethernet/intel/ice/ice_ethtool_fdir.c @@ -507,8 +507,7 @@ ice_parse_rx_flow_user_data(struct ethtool_rx_flow_spec *fsp, return -EINVAL; data->flex_word = value & ICE_USERDEF_FLEX_WORD_M; - data->flex_offset = (value & ICE_USERDEF_FLEX_OFFS_M) >> - ICE_USERDEF_FLEX_OFFS_S; + data->flex_offset = FIELD_GET(ICE_USERDEF_FLEX_OFFS_M, value); if (data->flex_offset > ICE_USERDEF_FLEX_MAX_OFFS_VAL) return -EINVAL; diff --git a/drivers/net/ethernet/intel/ice/ice_lib.c b/drivers/net/ethernet/intel/ice/ice_lib.c index 453eba59abb2..8cdd53412748 100644 --- a/drivers/net/ethernet/intel/ice/ice_lib.c +++ b/drivers/net/ethernet/intel/ice/ice_lib.c @@ -974,9 +974,8 @@ static void ice_set_dflt_vsi_ctx(struct ice_hw *hw, struct ice_vsi_ctx *ctxt) /* Traffic from VSI can be sent to LAN */ ctxt->info.sw_flags2 = ICE_AQ_VSI_SW_FLAG_LAN_ENA; /* 
allow all untagged/tagged packets by default on Tx */ - ctxt->info.inner_vlan_flags = ((ICE_AQ_VSI_INNER_VLAN_TX_MODE_ALL & - ICE_AQ_VSI_INNER_VLAN_TX_MODE_M) >> - ICE_AQ_VSI_INNER_VLAN_TX_MODE_S); + ctxt->info.inner_vlan_flags = FIELD_GET(ICE_AQ_VSI_INNER_VLAN_TX_MODE_M, + ICE_AQ_VSI_INNER_VLAN_TX_MODE_ALL); /* SVM - by default bits 3 and 4 in inner_vlan_flags are 0's which * results in legacy behavior (show VLAN, DEI, and UP) in descriptor. * diff --git a/drivers/net/ethernet/intel/ice/ice_main.c b/drivers/net/ethernet/intel/ice/ice_main.c index 1f159b4362ec..c7c6ec3e131b 100644 --- a/drivers/net/ethernet/intel/ice/ice_main.c +++ b/drivers/net/ethernet/intel/ice/ice_main.c @@ -980,7 +980,7 @@ static void ice_set_dflt_mib(struct ice_pf *pf) * Octets 13 - 20 are TSA values - leave as zeros */ buf[5] = 0x64; - len = (typelen & ICE_LLDP_TLV_LEN_M) >> ICE_LLDP_TLV_LEN_S; + len = FIELD_GET(ICE_LLDP_TLV_LEN_M, typelen); offset += len + 2; tlv = (struct ice_lldp_org_tlv *) ((char *)tlv + sizeof(tlv->typelen) + len); @@ -1014,7 +1014,7 @@ static void ice_set_dflt_mib(struct ice_pf *pf) /* Octet 1 left as all zeros - PFC disabled */ buf[0] = 0x08; - len = (typelen & ICE_LLDP_TLV_LEN_M) >> ICE_LLDP_TLV_LEN_S; + len = FIELD_GET(ICE_LLDP_TLV_LEN_M, typelen); offset += len + 2; if (ice_aq_set_lldp_mib(hw, mib_type, (void *)lldpmib, offset, NULL)) @@ -1745,14 +1745,10 @@ static void ice_handle_mdd_event(struct ice_pf *pf) /* find what triggered an MDD event */ reg = rd32(hw, GL_MDET_TX_PQM); if (reg & GL_MDET_TX_PQM_VALID_M) { - u8 pf_num = (reg & GL_MDET_TX_PQM_PF_NUM_M) >> - GL_MDET_TX_PQM_PF_NUM_S; - u16 vf_num = (reg & GL_MDET_TX_PQM_VF_NUM_M) >> - GL_MDET_TX_PQM_VF_NUM_S; - u8 event = (reg & GL_MDET_TX_PQM_MAL_TYPE_M) >> - GL_MDET_TX_PQM_MAL_TYPE_S; - u16 queue = ((reg & GL_MDET_TX_PQM_QNUM_M) >> - GL_MDET_TX_PQM_QNUM_S); + u8 pf_num = FIELD_GET(GL_MDET_TX_PQM_PF_NUM_M, reg); + u16 vf_num = FIELD_GET(GL_MDET_TX_PQM_VF_NUM_M, reg); + u8 event = FIELD_GET(GL_MDET_TX_PQM_MAL_TYPE_M, reg); + u16 queue = FIELD_GET(GL_MDET_TX_PQM_QNUM_M, reg); if (netif_msg_tx_err(pf)) dev_info(dev, "Malicious Driver Detection event %d on TX queue %d PF# %d VF# %d\n", @@ -1762,14 +1758,10 @@ static void ice_handle_mdd_event(struct ice_pf *pf) reg = rd32(hw, GL_MDET_TX_TCLAN_BY_MAC(hw)); if (reg & GL_MDET_TX_TCLAN_VALID_M) { - u8 pf_num = (reg & GL_MDET_TX_TCLAN_PF_NUM_M) >> - GL_MDET_TX_TCLAN_PF_NUM_S; - u16 vf_num = (reg & GL_MDET_TX_TCLAN_VF_NUM_M) >> - GL_MDET_TX_TCLAN_VF_NUM_S; - u8 event = (reg & GL_MDET_TX_TCLAN_MAL_TYPE_M) >> - GL_MDET_TX_TCLAN_MAL_TYPE_S; - u16 queue = ((reg & GL_MDET_TX_TCLAN_QNUM_M) >> - GL_MDET_TX_TCLAN_QNUM_S); + u8 pf_num = FIELD_GET(GL_MDET_TX_TCLAN_PF_NUM_M, reg); + u16 vf_num = FIELD_GET(GL_MDET_TX_TCLAN_VF_NUM_M, reg); + u8 event = FIELD_GET(GL_MDET_TX_TCLAN_MAL_TYPE_M, reg); + u16 queue = FIELD_GET(GL_MDET_TX_TCLAN_QNUM_M, reg); if (netif_msg_tx_err(pf)) dev_info(dev, "Malicious Driver Detection event %d on TX queue %d PF# %d VF# %d\n", @@ -1779,14 +1771,10 @@ static void ice_handle_mdd_event(struct ice_pf *pf) reg = rd32(hw, GL_MDET_RX); if (reg & GL_MDET_RX_VALID_M) { - u8 pf_num = (reg & GL_MDET_RX_PF_NUM_M) >> - GL_MDET_RX_PF_NUM_S; - u16 vf_num = (reg & GL_MDET_RX_VF_NUM_M) >> - GL_MDET_RX_VF_NUM_S; - u8 event = (reg & GL_MDET_RX_MAL_TYPE_M) >> - GL_MDET_RX_MAL_TYPE_S; - u16 queue = ((reg & GL_MDET_RX_QNUM_M) >> - GL_MDET_RX_QNUM_S); + u8 pf_num = FIELD_GET(GL_MDET_RX_PF_NUM_M, reg); + u16 vf_num = FIELD_GET(GL_MDET_RX_VF_NUM_M, reg); + u8 event = FIELD_GET(GL_MDET_RX_MAL_TYPE_M, 
reg); + u16 queue = FIELD_GET(GL_MDET_RX_QNUM_M, reg); if (netif_msg_rx_err(pf)) dev_info(dev, "Malicious Driver Detection event %d on RX queue %d PF# %d VF# %d\n", @@ -3117,8 +3105,8 @@ static irqreturn_t ice_misc_intr(int __always_unused irq, void *data) /* we have a reset warning */ ena_mask &= ~PFINT_OICR_GRST_M; - reset = (rd32(hw, GLGEN_RSTAT) & GLGEN_RSTAT_RESET_TYPE_M) >> - GLGEN_RSTAT_RESET_TYPE_S; + reset = FIELD_GET(GLGEN_RSTAT_RESET_TYPE_M, + rd32(hw, GLGEN_RSTAT)); if (reset == ICE_RESET_CORER) pf->corer_count++; @@ -7904,8 +7892,8 @@ static void ice_tx_timeout(struct net_device *netdev, unsigned int txqueue) struct ice_hw *hw = &pf->hw; u32 head, val = 0; - head = (rd32(hw, QTX_COMM_HEAD(vsi->txq_map[txqueue])) & - QTX_COMM_HEAD_HEAD_M) >> QTX_COMM_HEAD_HEAD_S; + head = FIELD_GET(QTX_COMM_HEAD_HEAD_M, + rd32(hw, QTX_COMM_HEAD(vsi->txq_map[txqueue]))); /* Read interrupt register */ val = rd32(hw, GLINT_DYN_CTL(tx_ring->q_vector->reg_idx)); diff --git a/drivers/net/ethernet/intel/ice/ice_nvm.c b/drivers/net/ethernet/intel/ice/ice_nvm.c index f6f52a248066..d4e05d2cb30c 100644 --- a/drivers/net/ethernet/intel/ice/ice_nvm.c +++ b/drivers/net/ethernet/intel/ice/ice_nvm.c @@ -571,8 +571,8 @@ ice_get_nvm_ver_info(struct ice_hw *hw, enum ice_bank_select bank, struct ice_nv return status; } - nvm->major = (ver & ICE_NVM_VER_HI_MASK) >> ICE_NVM_VER_HI_SHIFT; - nvm->minor = (ver & ICE_NVM_VER_LO_MASK) >> ICE_NVM_VER_LO_SHIFT; + nvm->major = FIELD_GET(ICE_NVM_VER_HI_MASK, ver); + nvm->minor = FIELD_GET(ICE_NVM_VER_LO_MASK, ver); status = ice_read_nvm_sr_copy(hw, bank, ICE_SR_NVM_EETRACK_LO, &eetrack_lo); if (status) { @@ -706,9 +706,9 @@ ice_get_orom_ver_info(struct ice_hw *hw, enum ice_bank_select bank, struct ice_o combo_ver = le32_to_cpu(civd.combo_ver); - orom->major = (u8)((combo_ver & ICE_OROM_VER_MASK) >> ICE_OROM_VER_SHIFT); - orom->patch = (u8)(combo_ver & ICE_OROM_VER_PATCH_MASK); - orom->build = (u16)((combo_ver & ICE_OROM_VER_BUILD_MASK) >> ICE_OROM_VER_BUILD_SHIFT); + orom->major = FIELD_GET(ICE_OROM_VER_MASK, combo_ver); + orom->patch = FIELD_GET(ICE_OROM_VER_PATCH_MASK, combo_ver); + orom->build = FIELD_GET(ICE_OROM_VER_BUILD_MASK, combo_ver); return 0; } @@ -950,7 +950,8 @@ static int ice_determine_active_flash_banks(struct ice_hw *hw) } /* Check that the control word indicates validity */ - if ((ctrl_word & ICE_SR_CTRL_WORD_1_M) >> ICE_SR_CTRL_WORD_1_S != ICE_SR_CTRL_WORD_VALID) { + if (FIELD_GET(ICE_SR_CTRL_WORD_1_M, ctrl_word) != + ICE_SR_CTRL_WORD_VALID) { ice_debug(hw, ICE_DBG_NVM, "Shadow RAM control word is invalid\n"); return -EIO; } @@ -1027,7 +1028,7 @@ int ice_init_nvm(struct ice_hw *hw) * as the blank mode may be used in the factory line. 
*/ gens_stat = rd32(hw, GLNVM_GENS); - sr_size = (gens_stat & GLNVM_GENS_SR_SIZE_M) >> GLNVM_GENS_SR_SIZE_S; + sr_size = FIELD_GET(GLNVM_GENS_SR_SIZE_M, gens_stat); /* Switching to words (sr_size contains power of 2) */ flash->sr_words = BIT(sr_size) * ICE_SR_WORDS_IN_1KB; diff --git a/drivers/net/ethernet/intel/ice/ice_ptp.c b/drivers/net/ethernet/intel/ice/ice_ptp.c index 42458554d2a7..266ea809ba3d 100644 --- a/drivers/net/ethernet/intel/ice/ice_ptp.c +++ b/drivers/net/ethernet/intel/ice/ice_ptp.c @@ -1140,9 +1140,9 @@ static int ice_ptp_check_tx_fifo(struct ice_ptp_port *port) } if (offs & 0x1) - phy_sts = (val & Q_REG_FIFO13_M) >> Q_REG_FIFO13_S; + phy_sts = FIELD_GET(Q_REG_FIFO13_M, val); else - phy_sts = (val & Q_REG_FIFO02_M) >> Q_REG_FIFO02_S; + phy_sts = FIELD_GET(Q_REG_FIFO02_M, val); if (phy_sts & FIFO_EMPTY) { port->tx_fifo_busy_cnt = FIFO_OK; diff --git a/drivers/net/ethernet/intel/ice/ice_sched.c b/drivers/net/ethernet/intel/ice/ice_sched.c index 2f4a621254e8..d174a4eeb899 100644 --- a/drivers/net/ethernet/intel/ice/ice_sched.c +++ b/drivers/net/ethernet/intel/ice/ice_sched.c @@ -1387,8 +1387,7 @@ void ice_sched_get_psm_clk_freq(struct ice_hw *hw) u32 val, clk_src; val = rd32(hw, GLGEN_CLKSTAT_SRC); - clk_src = (val & GLGEN_CLKSTAT_SRC_PSM_CLK_SRC_M) >> - GLGEN_CLKSTAT_SRC_PSM_CLK_SRC_S; + clk_src = FIELD_GET(GLGEN_CLKSTAT_SRC_PSM_CLK_SRC_M, val); #define PSM_CLK_SRC_367_MHZ 0x0 #define PSM_CLK_SRC_416_MHZ 0x1 diff --git a/drivers/net/ethernet/intel/ice/ice_sriov.c b/drivers/net/ethernet/intel/ice/ice_sriov.c index 54d602388c9c..4ee349fe6409 100644 --- a/drivers/net/ethernet/intel/ice/ice_sriov.c +++ b/drivers/net/ethernet/intel/ice/ice_sriov.c @@ -1318,8 +1318,7 @@ ice_vf_lan_overflow_event(struct ice_pf *pf, struct ice_rq_event_info *event) dev_dbg(ice_pf_to_dev(pf), "GLDCB_RTCTQ: 0x%08x\n", gldcb_rtctq); /* event returns device global Rx queue number */ - queue = (gldcb_rtctq & GLDCB_RTCTQ_RXQNUM_M) >> - GLDCB_RTCTQ_RXQNUM_S; + queue = FIELD_GET(GLDCB_RTCTQ_RXQNUM_M, gldcb_rtctq); vf = ice_get_vf_from_pfq(pf, ice_globalq_to_pfq(pf, queue)); if (!vf) diff --git a/drivers/net/ethernet/intel/ice/ice_virtchnl.c b/drivers/net/ethernet/intel/ice/ice_virtchnl.c index 727aebe24b92..0a918db3a59a 100644 --- a/drivers/net/ethernet/intel/ice/ice_virtchnl.c +++ b/drivers/net/ethernet/intel/ice/ice_virtchnl.c @@ -3019,7 +3019,7 @@ static struct ice_vlan ice_vc_to_vlan(struct virtchnl_vlan *vc_vlan) { struct ice_vlan vlan = { 0 }; - vlan.prio = (vc_vlan->tci & VLAN_PRIO_MASK) >> VLAN_PRIO_SHIFT; + vlan.prio = FIELD_GET(VLAN_PRIO_MASK, vc_vlan->tci); vlan.vid = vc_vlan->tci & VLAN_VID_MASK; vlan.tpid = vc_vlan->tpid; diff --git a/drivers/net/ethernet/intel/ice/ice_virtchnl_fdir.c b/drivers/net/ethernet/intel/ice/ice_virtchnl_fdir.c index e62104f895a1..e5bcd3f26141 100644 --- a/drivers/net/ethernet/intel/ice/ice_virtchnl_fdir.c +++ b/drivers/net/ethernet/intel/ice/ice_virtchnl_fdir.c @@ -1480,16 +1480,15 @@ ice_vf_verify_rx_desc(struct ice_vf *vf, struct ice_vf_fdir_ctx *ctx, int ret; stat_err = le16_to_cpu(ctx->rx_desc.wb.status_error0); - if (((stat_err & ICE_FXD_FLTR_WB_QW1_DD_M) >> - ICE_FXD_FLTR_WB_QW1_DD_S) != ICE_FXD_FLTR_WB_QW1_DD_YES) { + if (FIELD_GET(ICE_FXD_FLTR_WB_QW1_DD_M, stat_err) != + ICE_FXD_FLTR_WB_QW1_DD_YES) { *status = VIRTCHNL_FDIR_FAILURE_RULE_NORESOURCE; dev_err(dev, "VF %d: Desc Done not set\n", vf->vf_id); ret = -EINVAL; goto err_exit; } - prog_id = (stat_err & ICE_FXD_FLTR_WB_QW1_PROG_ID_M) >> - ICE_FXD_FLTR_WB_QW1_PROG_ID_S; + prog_id = 
FIELD_GET(ICE_FXD_FLTR_WB_QW1_PROG_ID_M, stat_err); if (prog_id == ICE_FXD_FLTR_WB_QW1_PROG_ADD && ctx->v_opcode != VIRTCHNL_OP_ADD_FDIR_FILTER) { dev_err(dev, "VF %d: Desc show add, but ctx not", @@ -1508,8 +1507,7 @@ ice_vf_verify_rx_desc(struct ice_vf *vf, struct ice_vf_fdir_ctx *ctx, goto err_exit; } - error = (stat_err & ICE_FXD_FLTR_WB_QW1_FAIL_M) >> - ICE_FXD_FLTR_WB_QW1_FAIL_S; + error = FIELD_GET(ICE_FXD_FLTR_WB_QW1_FAIL_M, stat_err); if (error == ICE_FXD_FLTR_WB_QW1_FAIL_YES) { if (prog_id == ICE_FXD_FLTR_WB_QW1_PROG_ADD) { dev_err(dev, "VF %d, Failed to add FDIR rule due to no space in the table", @@ -1524,8 +1522,7 @@ ice_vf_verify_rx_desc(struct ice_vf *vf, struct ice_vf_fdir_ctx *ctx, goto err_exit; } - error = (stat_err & ICE_FXD_FLTR_WB_QW1_FAIL_PROF_M) >> - ICE_FXD_FLTR_WB_QW1_FAIL_PROF_S; + error = FIELD_GET(ICE_FXD_FLTR_WB_QW1_FAIL_PROF_M, stat_err); if (error == ICE_FXD_FLTR_WB_QW1_FAIL_PROF_YES) { dev_err(dev, "VF %d: Profile matching error", vf->vf_id); *status = VIRTCHNL_FDIR_FAILURE_RULE_NORESOURCE; From patchwork Wed Dec 6 01:01:13 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jesse Brandeburg X-Patchwork-Id: 13480919 X-Patchwork-Delegate: kuba@kernel.org Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b="GWIosq7r" Received: from mgamail.intel.com (mgamail.intel.com [198.175.65.9]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id C4451122 for ; Tue, 5 Dec 2023 17:01:42 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1701824503; x=1733360503; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=DAG53SlvXiEznWpH1OLe1Us5lOc1W2k2kq3sL4iCNd4=; b=GWIosq7rkfWa96vjmekQzKWj7IhnH04nGhdAqt/TU+lytGi0QufHHBAf Onko6q7/PUkcyWV7akzoyI43na1JEu5v8FoqmtrKZw0QuyDlu0QwK8kug B0k7ZfG/IZalv+5wfkhQ601KCfPwHCEnmiGvltoMNLywOcoF2Whqnfe5M Ti0eOXEG63PT5fAFCPouNiGqbntNlpAQSJJidwGvOj051Lgoq1/jegbYp kpczZM0eLIrkua/XvPhHjYkWmbrhOdv3hioQqq14Jck9VVCZfr/5qgywl waA8Ds8C/hKSitAfBoYQhlqcTyHRFks9YZNMJVzyUEnEqAvewnSZpSjJE A==; X-IronPort-AV: E=McAfee;i="6600,9927,10915"; a="12700332" X-IronPort-AV: E=Sophos;i="6.04,254,1695711600"; d="scan'208";a="12700332" Received: from fmsmga004.fm.intel.com ([10.253.24.48]) by orvoesa101.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 05 Dec 2023 17:01:37 -0800 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10915"; a="841655292" X-IronPort-AV: E=Sophos;i="6.04,254,1695711600"; d="scan'208";a="841655292" Received: from jbrandeb-spr1.jf.intel.com ([10.166.28.233]) by fmsmga004-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 05 Dec 2023 17:01:35 -0800 From: Jesse Brandeburg To: intel-wired-lan@lists.osuosl.org Cc: Jesse Brandeburg , netdev@vger.kernel.org, aleksander.lobakin@intel.com, przemyslaw.kitszel@intel.com, horms@kernel.org, marcin.szycik@linux.intel.com Subject: [PATCH iwl-next v2 14/15] ice: cleanup inconsistent code Date: Tue, 5 Dec 2023 17:01:13 -0800 Message-Id: <20231206010114.2259388-15-jesse.brandeburg@intel.com> X-Mailer: git-send-email 2.39.3 In-Reply-To: <20231206010114.2259388-1-jesse.brandeburg@intel.com> References: <20231206010114.2259388-1-jesse.brandeburg@intel.com> Precedence: bulk X-Mailing-List: netdev@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Patchwork-Delegate: kuba@kernel.org It was found while doing further 
testing of the previous commit fbf32a9bab91 ("ice: field get conversion") that one of the FIELD_GET conversions should really be a FIELD_PREP. The previous code was styled as a match to the FIELD_GET conversion, which always worked because the shift value was 0. The code makes way more sense as a FIELD_PREP and was in fact the only FIELD_GET with two constant arguments in this series. Didn't squash this patch to make it easier to call out the (non-impactful) bug. Signed-off-by: Jesse Brandeburg Tested-by: Pucha Himasekhar Reddy (A Contingent worker at Intel) Reviewed-by: Simon Horman --- drivers/net/ethernet/intel/ice/ice_dcb.c | 2 +- drivers/net/ethernet/intel/ice/ice_lib.c | 4 ++-- 2 files changed, 3 insertions(+), 3 deletions(-) diff --git a/drivers/net/ethernet/intel/ice/ice_dcb.c b/drivers/net/ethernet/intel/ice/ice_dcb.c index 7f3e00c187b4..74418c445cc4 100644 --- a/drivers/net/ethernet/intel/ice/ice_dcb.c +++ b/drivers/net/ethernet/intel/ice/ice_dcb.c @@ -967,7 +967,7 @@ void ice_get_dcb_cfg_from_mib_change(struct ice_port_info *pi, mib = (struct ice_aqc_lldp_get_mib *)&event->desc.params.raw; - change_type = FIELD_GET(ICE_AQ_LLDP_MIB_TYPE_M, mib->type); + change_type = FIELD_GET(ICE_AQ_LLDP_MIB_TYPE_M, mib->type); if (change_type == ICE_AQ_LLDP_MIB_REMOTE) dcbx_cfg = &pi->qos_cfg.remote_dcbx_cfg; diff --git a/drivers/net/ethernet/intel/ice/ice_lib.c b/drivers/net/ethernet/intel/ice/ice_lib.c index 8cdd53412748..d1c1e53fe15c 100644 --- a/drivers/net/ethernet/intel/ice/ice_lib.c +++ b/drivers/net/ethernet/intel/ice/ice_lib.c @@ -974,8 +974,8 @@ static void ice_set_dflt_vsi_ctx(struct ice_hw *hw, struct ice_vsi_ctx *ctxt) /* Traffic from VSI can be sent to LAN */ ctxt->info.sw_flags2 = ICE_AQ_VSI_SW_FLAG_LAN_ENA; /* allow all untagged/tagged packets by default on Tx */ - ctxt->info.inner_vlan_flags = FIELD_GET(ICE_AQ_VSI_INNER_VLAN_TX_MODE_M, - ICE_AQ_VSI_INNER_VLAN_TX_MODE_ALL); + ctxt->info.inner_vlan_flags = FIELD_PREP(ICE_AQ_VSI_INNER_VLAN_TX_MODE_M, + ICE_AQ_VSI_INNER_VLAN_TX_MODE_ALL); /* SVM - by default bits 3 and 4 in inner_vlan_flags are 0's which * results in legacy behavior (show VLAN, DEI, and UP) in descriptor. 
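To make the FIELD_GET()/FIELD_PREP() point above concrete, here is a minimal userspace sketch. It is not the kernel's <linux/bitfield.h> (whose macros also add compile-time mask and type checks); the TX_MODE_* names and masks are hypothetical stand-ins, and __builtin_ctz() is a GCC/Clang builtin standing in for the kernel's shift computation. For a field whose shift is 0, as the commit message describes, both macros return the same value, which is why the earlier FIELD_GET() produced identical binary output; FIELD_PREP() is still the operation that says "pack this value into the register field", as the shifted field at the end shows.

/*
 * Standalone approximation of FIELD_GET()/FIELD_PREP(); illustration
 * only, not the kernel implementation.
 */
#include <stdio.h>

/* position of the mask's lowest set bit, e.g. 0x30 -> 4 */
#define BF_SHIFT(mask)		(__builtin_ctz(mask))
/* extract a field from a register value */
#define FIELD_GET(mask, reg)	(((reg) & (mask)) >> BF_SHIFT(mask))
/* place a value into a field of a register */
#define FIELD_PREP(mask, val)	(((val) << BF_SHIFT(mask)) & (mask))

#define TX_MODE_M	0x3u	/* hypothetical 2-bit field at bits 1:0 */
#define TX_MODE_ALL	0x3u

int main(void)
{
	/* shift is 0, so "get" and "prep" happen to agree here */
	printf("FIELD_GET : %#x\n", FIELD_GET(TX_MODE_M, TX_MODE_ALL));  /* 0x3 */
	printf("FIELD_PREP: %#x\n", FIELD_PREP(TX_MODE_M, TX_MODE_ALL)); /* 0x3 */

	/* for a shifted field (bits 5:4) only FIELD_PREP builds the
	 * register image; FIELD_GET would silently drop the value
	 */
	printf("prep 5:4: %#x\n", FIELD_PREP(0x30u, 0x3u)); /* 0x30 */
	printf("get  5:4: %#x\n", FIELD_GET(0x30u, 0x3u));  /* 0x0 */
	return 0;
}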
* From patchwork Wed Dec 6 01:01:14 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jesse Brandeburg X-Patchwork-Id: 13480924 X-Patchwork-Delegate: kuba@kernel.org Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b="PI2lc6YM" Received: from mgamail.intel.com (mgamail.intel.com [198.175.65.9]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 7BD34181 for ; Tue, 5 Dec 2023 17:01:43 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1701824504; x=1733360504; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=mnZASE63x34yCq6dzY8hV2ZsE25/tYf36JpzHzX+6rQ=; b=PI2lc6YMcLdZwrvZyyrAbSFo3roM/6NVmQdMtpKCIGcjWZ7ZZ5MUHyXu /wi5wk4wgDIkBc22jU3GtZB7x/CajKjUljWf4VU3+tVebnZhPm0M7YAuZ mu9OC8jveixpivVbnDm243FD6ZpeZ0DLMFDSBrPdWIXZPDQNN6/iVVYal 6Uf9HIzKcILdldjDUxunebu0cWv4JqX11VrUSxIJI/xd+PW1LiQXeNhWA +4FqV1itbE/eHaZCqsxmtfRssYzm1FeigGKjkiTvNKwR5WkuJXgfz/N7c KLkPUb8TJPUK2e02faK1Z/YvTEQAc39/AULXxp1zDUX2nJV/YmvmYicC8 A==; X-IronPort-AV: E=McAfee;i="6600,9927,10915"; a="12700338" X-IronPort-AV: E=Sophos;i="6.04,254,1695711600"; d="scan'208";a="12700338" Received: from fmsmga004.fm.intel.com ([10.253.24.48]) by orvoesa101.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 05 Dec 2023 17:01:37 -0800 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10915"; a="841655297" X-IronPort-AV: E=Sophos;i="6.04,254,1695711600"; d="scan'208";a="841655297" Received: from jbrandeb-spr1.jf.intel.com ([10.166.28.233]) by fmsmga004-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 05 Dec 2023 17:01:36 -0800 From: Jesse Brandeburg To: intel-wired-lan@lists.osuosl.org Cc: Jesse Brandeburg , netdev@vger.kernel.org, aleksander.lobakin@intel.com, przemyslaw.kitszel@intel.com, horms@kernel.org, marcin.szycik@linux.intel.com Subject: [PATCH iwl-next v2 15/15] idpf: refactor some missing field get/prep conversions Date: Tue, 5 Dec 2023 17:01:14 -0800 Message-Id: <20231206010114.2259388-16-jesse.brandeburg@intel.com> X-Mailer: git-send-email 2.39.3 In-Reply-To: <20231206010114.2259388-1-jesse.brandeburg@intel.com> References: <20231206010114.2259388-1-jesse.brandeburg@intel.com> Precedence: bulk X-Mailing-List: netdev@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Patchwork-Delegate: kuba@kernel.org Most of idpf correctly uses FIELD_GET and FIELD_PREP, but a couple spots were missed so fix those. Automated conversion with coccinelle script and manually fixed up, including audits for opportunities to convert to {get,encode,replace} bits functions. Add conversions to le16_get/encode/replace_bits where appropriate. And in one place fix up a cast from a u16 to a u16. @prep2@ constant shift,mask; type T; expression a; @@ -(((T)(a) << shift) & mask) +FIELD_PREP(mask, a) @prep@ constant shift,mask; type T; expression a; @@ -((T)((a) << shift) & mask) +FIELD_PREP(mask, a) @get@ constant shift,mask; type T; expression a; @@ -((T)((a) & mask) >> shift) +FIELD_GET(mask, a) and applied via: spatch --sp-file field_prep.cocci --in-place --dir \ drivers/net/ethernet/intel/ CC: Alexander Lobakin Reviewed-by: Przemek Kitszel Signed-off-by: Jesse Brandeburg Tested-by: Scott Register Reviewed-by: Simon Horman --- v2: merged this patch into larger series, modified after Olek's comments to include bits encoding where changing lines for prep or get. 
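For readers less familiar with the le16 helpers this patch converts to, below is a userspace approximation of le16_get_bits()/le16_encode_bits(). It assumes a little-endian host and uses hypothetical GEN_M/QID_M masks rather than the real IDPF_TXD_COMPLQ_* values; the kernel versions in <linux/bitfield.h> also handle big-endian hosts and add compile-time checks. The point of the conversion is that a single call now performs both the endianness conversion and the mask-and-shift that were previously open-coded.

/*
 * Illustration only; not the kernel implementation of the le16
 * bitfield helpers. Assumes a little-endian host.
 */
#include <stdio.h>
#include <stdint.h>

typedef uint16_t le16;			/* stand-in for __le16 */

#define BF_SHIFT(mask)		(__builtin_ctz(mask))
#define FIELD_GET(mask, reg)	(((reg) & (mask)) >> BF_SHIFT(mask))
#define FIELD_PREP(mask, val)	(((val) << BF_SHIFT(mask)) & (mask))

static inline uint16_t le16_to_cpu(le16 v) { return v; }	/* LE host */
static inline le16 cpu_to_le16(uint16_t v) { return v; }	/* LE host */

/* one call: endian conversion plus field extraction */
static inline uint16_t le16_get_bits(le16 v, uint16_t mask)
{
	return (uint16_t)FIELD_GET(mask, le16_to_cpu(v));
}

/* one call: field packing plus endian conversion */
static inline le16 le16_encode_bits(uint16_t val, uint16_t mask)
{
	return cpu_to_le16((uint16_t)FIELD_PREP(mask, val));
}

#define GEN_M	0x8000u		/* hypothetical generation bit, bit 15 */
#define QID_M	0x3ff8u		/* hypothetical queue-id field, bits 13:3 */

int main(void)
{
	le16 qid_comptype_gen = le16_encode_bits(1, GEN_M) |
				le16_encode_bits(42, QID_M);

	printf("gen=%u qid=%u\n",
	       (unsigned int)le16_get_bits(qid_comptype_gen, GEN_M),
	       (unsigned int)le16_get_bits(qid_comptype_gen, QID_M));
	return 0;
}

Built with gcc, this prints "gen=1 qid=42", loosely mirroring how the driver reads tx_desc->qid_comptype_gen in the hunks below, where the real IDPF_TXD_COMPLQ_* masks are used.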
---
 .../ethernet/intel/idpf/idpf_singleq_txrx.c |  7 +--
 drivers/net/ethernet/intel/idpf/idpf_txrx.c | 58 +++++++++----------
 2 files changed, 30 insertions(+), 35 deletions(-)

diff --git a/drivers/net/ethernet/intel/idpf/idpf_singleq_txrx.c b/drivers/net/ethernet/intel/idpf/idpf_singleq_txrx.c
index 81288a17da2a..447753495c53 100644
--- a/drivers/net/ethernet/intel/idpf/idpf_singleq_txrx.c
+++ b/drivers/net/ethernet/intel/idpf/idpf_singleq_txrx.c
@@ -328,10 +328,9 @@ static void idpf_tx_singleq_build_ctx_desc(struct idpf_queue *txq,
 
 	if (offload->tso_segs) {
 		qw1 |= IDPF_TX_CTX_DESC_TSO << IDPF_TXD_CTX_QW1_CMD_S;
-		qw1 |= ((u64)offload->tso_len << IDPF_TXD_CTX_QW1_TSO_LEN_S) &
-			IDPF_TXD_CTX_QW1_TSO_LEN_M;
-		qw1 |= ((u64)offload->mss << IDPF_TXD_CTX_QW1_MSS_S) &
-			IDPF_TXD_CTX_QW1_MSS_M;
+		qw1 |= FIELD_PREP(IDPF_TXD_CTX_QW1_TSO_LEN_M,
+				  offload->tso_len);
+		qw1 |= FIELD_PREP(IDPF_TXD_CTX_QW1_MSS_M, offload->mss);
 
 		u64_stats_update_begin(&txq->stats_sync);
 		u64_stats_inc(&txq->q_stats.tx.lso_pkts);
diff --git a/drivers/net/ethernet/intel/idpf/idpf_txrx.c b/drivers/net/ethernet/intel/idpf/idpf_txrx.c
index 1f728a9004d9..725f2477f979 100644
--- a/drivers/net/ethernet/intel/idpf/idpf_txrx.c
+++ b/drivers/net/ethernet/intel/idpf/idpf_txrx.c
@@ -505,9 +505,9 @@ static void idpf_rx_post_buf_refill(struct idpf_sw_queue *refillq, u16 buf_id)
 
 	/* store the buffer ID and the SW maintained GEN bit to the refillq */
 	refillq->ring[nta] =
-		((buf_id << IDPF_RX_BI_BUFID_S) & IDPF_RX_BI_BUFID_M) |
-		(!!(test_bit(__IDPF_Q_GEN_CHK, refillq->flags)) <<
-		 IDPF_RX_BI_GEN_S);
+		FIELD_PREP(IDPF_RX_BI_BUFID_M, buf_id) |
+		FIELD_PREP(IDPF_RX_BI_GEN_M,
+			   test_bit(__IDPF_Q_GEN_CHK, refillq->flags));
 
 	if (unlikely(++nta == refillq->desc_count)) {
 		nta = 0;
@@ -1825,14 +1825,14 @@ static bool idpf_tx_clean_complq(struct idpf_queue *complq, int budget,
 		u16 gen;
 
 		/* if the descriptor isn't done, no work yet to do */
-		gen = (le16_to_cpu(tx_desc->qid_comptype_gen) &
-		       IDPF_TXD_COMPLQ_GEN_M) >> IDPF_TXD_COMPLQ_GEN_S;
+		gen = le16_get_bits(tx_desc->qid_comptype_gen,
+				    IDPF_TXD_COMPLQ_GEN_M);
 		if (test_bit(__IDPF_Q_GEN_CHK, complq->flags) != gen)
 			break;
 
 		/* Find necessary info of TX queue to clean buffers */
-		rel_tx_qid = (le16_to_cpu(tx_desc->qid_comptype_gen) &
-			      IDPF_TXD_COMPLQ_QID_M) >> IDPF_TXD_COMPLQ_QID_S;
+		rel_tx_qid = le16_get_bits(tx_desc->qid_comptype_gen,
+					   IDPF_TXD_COMPLQ_QID_M);
 		if (rel_tx_qid >= complq->txq_grp->num_txq ||
 		    !complq->txq_grp->txqs[rel_tx_qid]) {
 			dev_err(&complq->vport->adapter->pdev->dev,
@@ -1842,9 +1842,8 @@ static bool idpf_tx_clean_complq(struct idpf_queue *complq, int budget,
 		tx_q = complq->txq_grp->txqs[rel_tx_qid];
 
 		/* Determine completion type */
-		ctype = (le16_to_cpu(tx_desc->qid_comptype_gen) &
-			 IDPF_TXD_COMPLQ_COMPL_TYPE_M) >>
-			IDPF_TXD_COMPLQ_COMPL_TYPE_S;
+		ctype = le16_get_bits(tx_desc->qid_comptype_gen,
+				      IDPF_TXD_COMPLQ_COMPL_TYPE_M);
 		switch (ctype) {
 		case IDPF_TXD_COMPLT_RE:
 			hw_head = le16_to_cpu(tx_desc->q_head_compl_tag.q_head);
@@ -1945,11 +1944,10 @@ void idpf_tx_splitq_build_ctb(union idpf_tx_flex_desc *desc,
 			      u16 td_cmd, u16 size)
 {
 	desc->q.qw1.cmd_dtype =
-		cpu_to_le16(params->dtype & IDPF_FLEX_TXD_QW1_DTYPE_M);
+		le16_encode_bits(params->dtype, IDPF_FLEX_TXD_QW1_DTYPE_M);
 	desc->q.qw1.cmd_dtype |=
-		cpu_to_le16((td_cmd << IDPF_FLEX_TXD_QW1_CMD_S) &
-			    IDPF_FLEX_TXD_QW1_CMD_M);
-	desc->q.qw1.buf_size = cpu_to_le16((u16)size);
+		le16_encode_bits(td_cmd, IDPF_FLEX_TXD_QW1_CMD_M);
+	desc->q.qw1.buf_size = cpu_to_le16(size);
 	desc->q.qw1.l2tags.l2tag1 = cpu_to_le16(params->td_tag);
 }
 
@@ -2843,8 +2841,9 @@ static void idpf_rx_splitq_extract_csum_bits(struct virtchnl2_rx_flex_desc_adv_n
 					    qword1);
 	csum->ipv6exadd = FIELD_GET(VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_IPV6EXADD_M,
 				    qword0);
-	csum->raw_csum_inv = FIELD_GET(VIRTCHNL2_RX_FLEX_DESC_ADV_RAW_CSUM_INV_M,
-				       le16_to_cpu(rx_desc->ptype_err_fflags0));
+	csum->raw_csum_inv =
+		le16_get_bits(rx_desc->ptype_err_fflags0,
+			      VIRTCHNL2_RX_FLEX_DESC_ADV_RAW_CSUM_INV_M);
 	csum->raw_csum = le16_to_cpu(rx_desc->misc.raw_cs);
 }
 
@@ -2938,8 +2937,8 @@ static int idpf_rx_process_skb_fields(struct idpf_queue *rxq,
 	struct idpf_rx_ptype_decoded decoded;
 	u16 rx_ptype;
 
-	rx_ptype = FIELD_GET(VIRTCHNL2_RX_FLEX_DESC_ADV_PTYPE_M,
-			     le16_to_cpu(rx_desc->ptype_err_fflags0));
+	rx_ptype = le16_get_bits(rx_desc->ptype_err_fflags0,
+				 VIRTCHNL2_RX_FLEX_DESC_ADV_PTYPE_M);
 
 	decoded = rxq->vport->rx_ptype_lkup[rx_ptype];
 	/* If we don't know the ptype we can't do anything else with it. Just
@@ -2953,8 +2952,8 @@ static int idpf_rx_process_skb_fields(struct idpf_queue *rxq,
 
 	skb->protocol = eth_type_trans(skb, rxq->vport->netdev);
 
-	if (FIELD_GET(VIRTCHNL2_RX_FLEX_DESC_ADV_RSC_M,
-		      le16_to_cpu(rx_desc->hdrlen_flags)))
+	if (le16_get_bits(rx_desc->hdrlen_flags,
+			  VIRTCHNL2_RX_FLEX_DESC_ADV_RSC_M))
 		return idpf_rx_rsc(rxq, skb, rx_desc, &decoded);
 
 	idpf_rx_splitq_extract_csum_bits(rx_desc, &csum_bits);
@@ -3148,8 +3147,8 @@ static int idpf_rx_splitq_clean(struct idpf_queue *rxq, int budget)
 		dma_rmb();
 
 		/* if the descriptor isn't done, no work yet to do */
-		gen_id = le16_to_cpu(rx_desc->pktlen_gen_bufq_id);
-		gen_id = FIELD_GET(VIRTCHNL2_RX_FLEX_DESC_ADV_GEN_M, gen_id);
+		gen_id = le16_get_bits(rx_desc->pktlen_gen_bufq_id,
+				       VIRTCHNL2_RX_FLEX_DESC_ADV_GEN_M);
 
 		if (test_bit(__IDPF_Q_GEN_CHK, rxq->flags) != gen_id)
 			break;
@@ -3164,9 +3163,8 @@ static int idpf_rx_splitq_clean(struct idpf_queue *rxq, int budget)
 			continue;
 		}
 
-		pkt_len = le16_to_cpu(rx_desc->pktlen_gen_bufq_id);
-		pkt_len = FIELD_GET(VIRTCHNL2_RX_FLEX_DESC_ADV_LEN_PBUF_M,
-				    pkt_len);
+		pkt_len = le16_get_bits(rx_desc->pktlen_gen_bufq_id,
+					VIRTCHNL2_RX_FLEX_DESC_ADV_LEN_PBUF_M);
 
 		hbo = FIELD_GET(VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_HBO_M,
 				rx_desc->status_err0_qw1);
@@ -3183,14 +3181,12 @@ static int idpf_rx_splitq_clean(struct idpf_queue *rxq, int budget)
 			goto bypass_hsplit;
 		}
 
-		hdr_len = le16_to_cpu(rx_desc->hdrlen_flags);
-		hdr_len = FIELD_GET(VIRTCHNL2_RX_FLEX_DESC_ADV_LEN_HDR_M,
-				    hdr_len);
+		hdr_len = le16_get_bits(rx_desc->hdrlen_flags,
+					VIRTCHNL2_RX_FLEX_DESC_ADV_LEN_HDR_M);
 
 bypass_hsplit:
-		bufq_id = le16_to_cpu(rx_desc->pktlen_gen_bufq_id);
-		bufq_id = FIELD_GET(VIRTCHNL2_RX_FLEX_DESC_ADV_BUFQ_ID_M,
-				    bufq_id);
+		bufq_id = le16_get_bits(rx_desc->pktlen_gen_bufq_id,
+					VIRTCHNL2_RX_FLEX_DESC_ADV_BUFQ_ID_M);
 
 		rxq_set = container_of(rxq, struct idpf_rxq_set, rxq);
 		if (!bufq_id)