From patchwork Tue Dec 5 21:08:30 2023
X-Patchwork-Submitter: Larysa Zaremba
X-Patchwork-Id: 13480735
X-Patchwork-Delegate: bpf@iogearbox.net
From: Larysa Zaremba
To: bpf@vger.kernel.org
Cc: Larysa Zaremba, ast@kernel.org, daniel@iogearbox.net, andrii@kernel.org,
    martin.lau@linux.dev, song@kernel.org, yhs@fb.com,
    john.fastabend@gmail.com, kpsingh@kernel.org, sdf@google.com,
    haoluo@google.com, jolsa@kernel.org, David Ahern, Jakub Kicinski,
    Willem de Bruijn, Jesper Dangaard Brouer, Anatoly Burakov,
    Alexander Lobakin, Magnus Karlsson, Maryam Tahhan,
    xdp-hints@xdp-project.net, netdev@vger.kernel.org,
    Alexei Starovoitov, Tariq Toukan, Saeed Mahameed, Maciej Fijalkowski
Subject: [PATCH bpf-next v8 01/18] ice: make RX hash reading code more reusable
Date: Tue, 5 Dec 2023 22:08:30 +0100
Message-ID: <20231205210847.28460-2-larysa.zaremba@intel.com>
In-Reply-To: <20231205210847.28460-1-larysa.zaremba@intel.com>
References: <20231205210847.28460-1-larysa.zaremba@intel.com>

Previously, we only needed the RX hash in the skb path, so all the
related code was written with the skb in mind. With the addition of
XDP hints via kfuncs to the ice driver, the same logic will also be
needed in the .xmo_() callbacks.

Separate the generic part, reading the RX hash from a descriptor,
into its own function.
Reviewed-by: Maciej Fijalkowski
Signed-off-by: Larysa Zaremba
---
 drivers/net/ethernet/intel/ice/ice_txrx_lib.c | 36 +++++++++++++------
 1 file changed, 25 insertions(+), 11 deletions(-)

diff --git a/drivers/net/ethernet/intel/ice/ice_txrx_lib.c b/drivers/net/ethernet/intel/ice/ice_txrx_lib.c
index 7e06373e14d9..17530359aaf8 100644
--- a/drivers/net/ethernet/intel/ice/ice_txrx_lib.c
+++ b/drivers/net/ethernet/intel/ice/ice_txrx_lib.c
@@ -63,28 +63,42 @@ static enum pkt_hash_types ice_ptype_to_htype(u16 ptype)
 }
 
 /**
- * ice_rx_hash - set the hash value in the skb
+ * ice_get_rx_hash - get RX hash value from descriptor
+ * @rx_desc: specific descriptor
+ *
+ * Returns hash, if present, 0 otherwise.
+ */
+static u32 ice_get_rx_hash(const union ice_32b_rx_flex_desc *rx_desc)
+{
+	const struct ice_32b_rx_flex_desc_nic *nic_mdid;
+
+	if (unlikely(rx_desc->wb.rxdid != ICE_RXDID_FLEX_NIC))
+		return 0;
+
+	nic_mdid = (struct ice_32b_rx_flex_desc_nic *)rx_desc;
+	return le32_to_cpu(nic_mdid->rss_hash);
+}
+
+/**
+ * ice_rx_hash_to_skb - set the hash value in the skb
  * @rx_ring: descriptor ring
  * @rx_desc: specific descriptor
  * @skb: pointer to current skb
  * @rx_ptype: the ptype value from the descriptor
  */
 static void
-ice_rx_hash(struct ice_rx_ring *rx_ring, union ice_32b_rx_flex_desc *rx_desc,
-	    struct sk_buff *skb, u16 rx_ptype)
+ice_rx_hash_to_skb(const struct ice_rx_ring *rx_ring,
+		   const union ice_32b_rx_flex_desc *rx_desc,
+		   struct sk_buff *skb, u16 rx_ptype)
 {
-	struct ice_32b_rx_flex_desc_nic *nic_mdid;
 	u32 hash;
 
 	if (!(rx_ring->netdev->features & NETIF_F_RXHASH))
 		return;
 
-	if (rx_desc->wb.rxdid != ICE_RXDID_FLEX_NIC)
-		return;
-
-	nic_mdid = (struct ice_32b_rx_flex_desc_nic *)rx_desc;
-	hash = le32_to_cpu(nic_mdid->rss_hash);
-	skb_set_hash(skb, hash, ice_ptype_to_htype(rx_ptype));
+	hash = ice_get_rx_hash(rx_desc);
+	if (likely(hash))
+		skb_set_hash(skb, hash, ice_ptype_to_htype(rx_ptype));
 }
 
 /**
@@ -186,7 +200,7 @@ ice_process_skb_fields(struct ice_rx_ring *rx_ring,
 		       union ice_32b_rx_flex_desc *rx_desc,
 		       struct sk_buff *skb, u16 ptype)
 {
-	ice_rx_hash(rx_ring, rx_desc, skb, ptype);
+	ice_rx_hash_to_skb(rx_ring, rx_desc, skb, ptype);
 
 	/* modifies the skb - consumes the enet header */
 	skb->protocol = eth_type_trans(skb, rx_ring->netdev);
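
[Editor's note] For context on the intended reuse: with the hash read
factored out, a later patch in this series can call ice_get_rx_hash()
from an .xmo_rx_hash() XDP metadata callback. Below is a minimal sketch
of what such a callback could look like; the ice_xdp_buff container,
its eop_desc field, and the ice_xdp_rx_hash_type() helper are
assumptions for illustration, not part of this patch.

/* Illustrative sketch only: reusing ice_get_rx_hash() from an XDP
 * metadata (.xmo_rx_hash) callback. The ice_xdp_buff layout and the
 * ice_xdp_rx_hash_type() helper are assumed for this example.
 */
static int ice_xdp_rx_hash(const struct xdp_md *ctx, u32 *hash,
			   enum xdp_rss_hash_type *rss_type)
{
	const struct ice_xdp_buff *xdp_ext = (const void *)ctx;

	/* Same descriptor read as the skb path, no skb required */
	*hash = ice_get_rx_hash(xdp_ext->eop_desc);
	*rss_type = ice_xdp_rx_hash_type(xdp_ext->eop_desc);
	if (!*hash)
		return -ENODATA;

	return 0;
}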