From patchwork Fri Jul 28 17:39:03 2023
X-Patchwork-Submitter: Larysa Zaremba
X-Patchwork-Id: 13332216
X-Patchwork-Delegate: bpf@iogearbox.net
From: Larysa Zaremba
To: bpf@vger.kernel.org
Cc: Larysa Zaremba, ast@kernel.org, daniel@iogearbox.net, andrii@kernel.org,
    martin.lau@linux.dev, song@kernel.org, yhs@fb.com, john.fastabend@gmail.com,
    kpsingh@kernel.org, sdf@google.com, haoluo@google.com, jolsa@kernel.org,
    David Ahern, Jakub Kicinski, Willem de Bruijn, Jesper Dangaard Brouer,
    Anatoly Burakov, Alexander Lobakin, Magnus Karlsson, Maryam Tahhan,
    xdp-hints@xdp-project.net, netdev@vger.kernel.org, Willem de Bruijn,
    Alexei Starovoitov, Simon Horman
Subject: [PATCH bpf-next v4 01/21] ice: make RX hash reading code more reusable
Date: Fri, 28 Jul 2023 19:39:03 +0200
Message-ID: <20230728173923.1318596-2-larysa.zaremba@intel.com>
In-Reply-To: <20230728173923.1318596-1-larysa.zaremba@intel.com>
References: <20230728173923.1318596-1-larysa.zaremba@intel.com>

Previously, we only needed the RX hash in the skb path, hence all related
code was written with the skb in mind.
But with the addition of XDP hints via kfuncs to the ice driver, the same
logic will be needed in .xmo_() callbacks. Factor out the generic process of
reading the RX hash from a descriptor into a separate function.

Signed-off-by: Larysa Zaremba
---
 drivers/net/ethernet/intel/ice/ice_txrx_lib.c | 37 +++++++++++++------
 1 file changed, 26 insertions(+), 11 deletions(-)

diff --git a/drivers/net/ethernet/intel/ice/ice_txrx_lib.c b/drivers/net/ethernet/intel/ice/ice_txrx_lib.c
index c8322fb6f2b3..8f7f6d78f7bf 100644
--- a/drivers/net/ethernet/intel/ice/ice_txrx_lib.c
+++ b/drivers/net/ethernet/intel/ice/ice_txrx_lib.c
@@ -63,28 +63,43 @@ static enum pkt_hash_types ice_ptype_to_htype(u16 ptype)
 }
 
 /**
- * ice_rx_hash - set the hash value in the skb
+ * ice_get_rx_hash - get RX hash value from descriptor
+ * @rx_desc: specific descriptor
+ *
+ * Returns hash, if present, 0 otherwise.
+ */
+static u32
+ice_get_rx_hash(const union ice_32b_rx_flex_desc *rx_desc)
+{
+	const struct ice_32b_rx_flex_desc_nic *nic_mdid;
+
+	if (rx_desc->wb.rxdid != ICE_RXDID_FLEX_NIC)
+		return 0;
+
+	nic_mdid = (struct ice_32b_rx_flex_desc_nic *)rx_desc;
+	return le32_to_cpu(nic_mdid->rss_hash);
+}
+
+/**
+ * ice_rx_hash_to_skb - set the hash value in the skb
  * @rx_ring: descriptor ring
  * @rx_desc: specific descriptor
  * @skb: pointer to current skb
  * @rx_ptype: the ptype value from the descriptor
  */
 static void
-ice_rx_hash(struct ice_rx_ring *rx_ring, union ice_32b_rx_flex_desc *rx_desc,
-	    struct sk_buff *skb, u16 rx_ptype)
+ice_rx_hash_to_skb(const struct ice_rx_ring *rx_ring,
+		   const union ice_32b_rx_flex_desc *rx_desc,
+		   struct sk_buff *skb, u16 rx_ptype)
 {
-	struct ice_32b_rx_flex_desc_nic *nic_mdid;
 	u32 hash;
 
 	if (!(rx_ring->netdev->features & NETIF_F_RXHASH))
 		return;
 
-	if (rx_desc->wb.rxdid != ICE_RXDID_FLEX_NIC)
-		return;
-
-	nic_mdid = (struct ice_32b_rx_flex_desc_nic *)rx_desc;
-	hash = le32_to_cpu(nic_mdid->rss_hash);
-	skb_set_hash(skb, hash, ice_ptype_to_htype(rx_ptype));
+	hash = ice_get_rx_hash(rx_desc);
+	if (likely(hash))
+		skb_set_hash(skb, hash, ice_ptype_to_htype(rx_ptype));
 }
 
 /**
@@ -186,7 +201,7 @@ ice_process_skb_fields(struct ice_rx_ring *rx_ring,
 		       union ice_32b_rx_flex_desc *rx_desc,
 		       struct sk_buff *skb, u16 ptype)
 {
-	ice_rx_hash(rx_ring, rx_desc, skb, ptype);
+	ice_rx_hash_to_skb(rx_ring, rx_desc, skb, ptype);
 
 	/* modifies the skb - consumes the enet header */
 	skb->protocol = eth_type_trans(skb, rx_ring->netdev);
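
For context, a rough sketch of the kind of reuse this factoring enables in the
later XDP hints patches: an .xmo_rx_hash() metadata callback can call the new
helper directly on a saved descriptor. The ice_xdp_buff wrapper, its eop_desc
field, and the rss_type handling below are illustrative assumptions, not
definitions taken from this patch:

static int ice_xdp_rx_hash(const struct xdp_md *ctx, u32 *hash,
			   enum xdp_rss_hash_type *rss_type)
{
	/* assumed driver-private wrapper that carries the RX descriptor */
	const struct ice_xdp_buff *xdp_ext = (const struct ice_xdp_buff *)ctx;

	/* reuse the helper factored out above */
	*hash = ice_get_rx_hash(xdp_ext->eop_desc);
	if (!*hash)
		return -ENODATA;

	/* placeholder: a real implementation would map the descriptor
	 * packet type to a specific XDP_RSS_TYPE_* value
	 */
	*rss_type = XDP_RSS_TYPE_NONE;
	return 0;
}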