From patchwork Tue Nov 21 21:19:20 2023
X-Patchwork-Submitter: Jesse Brandeburg
X-Patchwork-Id: 13463590
X-Patchwork-Delegate: kuba@kernel.org
From: Jesse Brandeburg
To: intel-wired-lan@lists.osuosl.org
Cc: Jesse Brandeburg, netdev@vger.kernel.org, Julia Lawall, Marcin Szycik
Subject: [PATCH iwl-next v1 12/13] iavf: field get conversion
Date: Tue, 21 Nov 2023 13:19:20 -0800
Message-Id: <20231121211921.19834-13-jesse.brandeburg@intel.com>
In-Reply-To: <20231121211921.19834-1-jesse.brandeburg@intel.com>
References: <20231121211921.19834-1-jesse.brandeburg@intel.com>
X-Mailer: git-send-email 2.39.3
X-Mailing-List: netdev@vger.kernel.org

Refactor the iavf driver to use FIELD_GET() for mask and shift reads,
which reduces lines of code and adds clarity of intent.
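For example, the conversion has this shape (the field layout below is a
made-up illustration for this cover text, not an actual iavf
definition): an open-coded mask-and-shift becomes a single FIELD_GET()
call that derives the shift from the mask.

  #include <linux/bitfield.h>
  #include <linux/types.h>

  /* hypothetical 8-bit field occupying bits 37:30 of a descriptor qword */
  #define EXAMPLE_PTYPE_M	GENMASK_ULL(37, 30)

  static inline u8 example_get_ptype(u64 qword)
  {
  	/* before: return (qword & EXAMPLE_PTYPE_M) >> 30; */
  	/* after: the shift is computed from the mask's lowest set bit */
  	return FIELD_GET(EXAMPLE_PTYPE_M, qword);
  }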
This code was generated by the following coccinelle/spatch script and
then manually repaired in a later patch:

@get@
constant shift,mask;
expression a;
@@

-((a & mask) >> shift)
+FIELD_GET(mask, a)

and applied via:

spatch --sp-file field_prep.cocci --in-place --dir \
	drivers/net/ethernet/intel/

Cc: Julia Lawall
Reviewed-by: Marcin Szycik
Signed-off-by: Jesse Brandeburg
Reviewed-by: Simon Horman
---
 .../net/ethernet/intel/iavf/iavf_ethtool.c  |  3 +--
 drivers/net/ethernet/intel/iavf/iavf_txrx.c | 20 +++++++-------------
 2 files changed, 8 insertions(+), 15 deletions(-)

diff --git a/drivers/net/ethernet/intel/iavf/iavf_ethtool.c b/drivers/net/ethernet/intel/iavf/iavf_ethtool.c
index 11150bdc63d0..90d8f1fcc3aa 100644
--- a/drivers/net/ethernet/intel/iavf/iavf_ethtool.c
+++ b/drivers/net/ethernet/intel/iavf/iavf_ethtool.c
@@ -1020,8 +1020,7 @@ iavf_parse_rx_flow_user_data(struct ethtool_rx_flow_spec *fsp,
 #define IAVF_USERDEF_FLEX_MAX_OFFS_VAL 504
 		flex = &fltr->flex_words[cnt++];
 		flex->word = value & IAVF_USERDEF_FLEX_WORD_M;
-		flex->offset = (value & IAVF_USERDEF_FLEX_OFFS_M) >>
-			       IAVF_USERDEF_FLEX_OFFS_S;
+		flex->offset = FIELD_GET(IAVF_USERDEF_FLEX_OFFS_M, value);
 		if (flex->offset > IAVF_USERDEF_FLEX_MAX_OFFS_VAL)
 			return -EINVAL;
 	}
diff --git a/drivers/net/ethernet/intel/iavf/iavf_txrx.c b/drivers/net/ethernet/intel/iavf/iavf_txrx.c
index fb7edba9c2f8..b71484c87a84 100644
--- a/drivers/net/ethernet/intel/iavf/iavf_txrx.c
+++ b/drivers/net/ethernet/intel/iavf/iavf_txrx.c
@@ -989,11 +989,9 @@ static void iavf_rx_checksum(struct iavf_vsi *vsi,
 	u64 qword;
 
 	qword = le64_to_cpu(rx_desc->wb.qword1.status_error_len);
-	ptype = (qword & IAVF_RXD_QW1_PTYPE_MASK) >> IAVF_RXD_QW1_PTYPE_SHIFT;
-	rx_error = (qword & IAVF_RXD_QW1_ERROR_MASK) >>
-		   IAVF_RXD_QW1_ERROR_SHIFT;
-	rx_status = (qword & IAVF_RXD_QW1_STATUS_MASK) >>
-		    IAVF_RXD_QW1_STATUS_SHIFT;
+	ptype = FIELD_GET(IAVF_RXD_QW1_PTYPE_MASK, qword);
+	rx_error = FIELD_GET(IAVF_RXD_QW1_ERROR_MASK, qword);
+	rx_status = FIELD_GET(IAVF_RXD_QW1_STATUS_MASK, qword);
 	decoded = decode_rx_desc_ptype(ptype);
 
 	skb->ip_summed = CHECKSUM_NONE;
@@ -1534,8 +1532,7 @@ static int iavf_clean_rx_irq(struct iavf_ring *rx_ring, int budget)
 		if (!iavf_test_staterr(rx_desc, IAVF_RXD_DD))
 			break;
 
-		size = (qword & IAVF_RXD_QW1_LENGTH_PBUF_MASK) >>
-		       IAVF_RXD_QW1_LENGTH_PBUF_SHIFT;
+		size = FIELD_GET(IAVF_RXD_QW1_LENGTH_PBUF_MASK, qword);
 
 		iavf_trace(clean_rx_irq, rx_ring, rx_desc, skb);
 		rx_buffer = iavf_get_rx_buffer(rx_ring, size);
@@ -1582,8 +1579,7 @@ static int iavf_clean_rx_irq(struct iavf_ring *rx_ring, int budget)
 		total_rx_bytes += skb->len;
 
 		qword = le64_to_cpu(rx_desc->wb.qword1.status_error_len);
-		rx_ptype = (qword & IAVF_RXD_QW1_PTYPE_MASK) >>
-			   IAVF_RXD_QW1_PTYPE_SHIFT;
+		rx_ptype = FIELD_GET(IAVF_RXD_QW1_PTYPE_MASK, qword);
 
 		/* populate checksum, VLAN, and protocol */
 		iavf_process_skb_fields(rx_ring, rx_desc, skb, rx_ptype);
@@ -2291,8 +2287,7 @@ static void iavf_tx_map(struct iavf_ring *tx_ring, struct sk_buff *skb,
 
 	if (tx_flags & IAVF_TX_FLAGS_HW_VLAN) {
 		td_cmd |= IAVF_TX_DESC_CMD_IL2TAG1;
-		td_tag = (tx_flags & IAVF_TX_FLAGS_VLAN_MASK) >>
-			 IAVF_TX_FLAGS_VLAN_SHIFT;
+		td_tag = FIELD_GET(IAVF_TX_FLAGS_VLAN_MASK, tx_flags);
 	}
 
 	first->tx_flags = tx_flags;
@@ -2468,8 +2463,7 @@ static netdev_tx_t iavf_xmit_frame_ring(struct sk_buff *skb,
 	if (tx_flags & IAVF_TX_FLAGS_HW_OUTER_SINGLE_VLAN) {
 		cd_type_cmd_tso_mss |= IAVF_TX_CTX_DESC_IL2TAG2 <<
 			IAVF_TXD_CTX_QW1_CMD_SHIFT;
-		cd_l2tag2 = (tx_flags & IAVF_TX_FLAGS_VLAN_MASK) >>
-			    IAVF_TX_FLAGS_VLAN_SHIFT;
+		cd_l2tag2 = FIELD_GET(IAVF_TX_FLAGS_VLAN_MASK, tx_flags);
 	}
 
 	/* obtain protocol of skb */
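For reference, FIELD_GET() comes from <linux/bitfield.h> and works out
the shift from the mask itself, which is why the *_SHIFT defines are no
longer needed at these call sites. A simplified sketch of the expansion
(the real macro also performs compile-time type and bounds checks):

  /* __bf_shf(): bit position of the mask's lowest set bit */
  #define __bf_shf(x)	(__builtin_ffsll(x) - 1)

  /* roughly what FIELD_GET(_mask, _reg) evaluates to */
  #define FIELD_GET(_mask, _reg) \
  	((typeof(_mask))(((_reg) & (_mask)) >> __bf_shf(_mask)))

So, for instance, FIELD_GET(IAVF_TX_FLAGS_VLAN_MASK, tx_flags) matches
the removed (tx_flags & IAVF_TX_FLAGS_VLAN_MASK) >>
IAVF_TX_FLAGS_VLAN_SHIFT expression, provided the shift define equals
the lowest set bit of the mask, as it does for the conversions in this
patch.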