From patchwork Tue Apr 15 17:28:10 2025
X-Patchwork-Submitter: Alexander Lobakin
X-Patchwork-Id: 14052487
X-Patchwork-Delegate: kuba@kernel.org
From: Alexander Lobakin
To: intel-wired-lan@lists.osuosl.org
Cc: Alexander Lobakin, Michal Kubiak, Maciej Fijalkowski, Tony Nguyen, Przemek Kitszel, Andrew Lunn, David S.
Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni , Alexei Starovoitov , Daniel Borkmann , Jesper Dangaard Brouer , John Fastabend , Simon Horman , bpf@vger.kernel.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org, Mina Almasry Subject: [PATCH iwl-next 01/16] libeth: convert to netmem Date: Tue, 15 Apr 2025 19:28:10 +0200 Message-ID: <20250415172825.3731091-2-aleksander.lobakin@intel.com> X-Mailer: git-send-email 2.49.0 In-Reply-To: <20250415172825.3731091-1-aleksander.lobakin@intel.com> References: <20250415172825.3731091-1-aleksander.lobakin@intel.com> Precedence: bulk X-Mailing-List: bpf@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Patchwork-Delegate: kuba@kernel.org Back when the libeth Rx core was initially written, devmem was a draft and netmem_ref didn't exist in the mainline. Now that it's here, make libeth MP-agnostic before introducing any new code or any new library users. When it's known that the created PP/FQ is for header buffers, use faster "unsafe" underscored netmem <--> virt accessors as netmem_is_net_iov() is always false in that case, but consumes some cycles (bit test + true branch). Misc: replace explicit EXPORT_SYMBOL_NS_GPL("NS") with DEFAULT_SYMBOL_NAMESPACE. Reviewed-by: Mina Almasry Signed-off-by: Alexander Lobakin --- include/net/libeth/rx.h | 22 +++++++------ drivers/net/ethernet/intel/iavf/iavf_txrx.c | 14 ++++---- .../ethernet/intel/idpf/idpf_singleq_txrx.c | 2 +- drivers/net/ethernet/intel/idpf/idpf_txrx.c | 33 +++++++++++-------- drivers/net/ethernet/intel/libeth/rx.c | 20 ++++++----- 5 files changed, 51 insertions(+), 40 deletions(-) diff --git a/include/net/libeth/rx.h b/include/net/libeth/rx.h index ab05024be518..7d5dc58984b1 100644 --- a/include/net/libeth/rx.h +++ b/include/net/libeth/rx.h @@ -1,5 +1,5 @@ /* SPDX-License-Identifier: GPL-2.0-only */ -/* Copyright (C) 2024 Intel Corporation */ +/* Copyright (C) 2024-2025 Intel Corporation */ #ifndef __LIBETH_RX_H #define __LIBETH_RX_H @@ -31,7 +31,7 @@ /** * struct libeth_fqe - structure representing an Rx buffer (fill queue element) - * @page: page holding the buffer + * @netmem: network memory reference holding the buffer * @offset: offset from the page start (to the headroom) * @truesize: total space occupied by the buffer (w/ headroom and tailroom) * @@ -40,7 +40,7 @@ * former, @offset is always 0 and @truesize is always ```PAGE_SIZE```. */ struct libeth_fqe { - struct page *page; + netmem_ref netmem; u32 offset; u32 truesize; } __aligned_largest; @@ -102,15 +102,16 @@ static inline dma_addr_t libeth_rx_alloc(const struct libeth_fq_fp *fq, u32 i) struct libeth_fqe *buf = &fq->fqes[i]; buf->truesize = fq->truesize; - buf->page = page_pool_dev_alloc(fq->pp, &buf->offset, &buf->truesize); - if (unlikely(!buf->page)) + buf->netmem = page_pool_dev_alloc_netmem(fq->pp, &buf->offset, + &buf->truesize); + if (unlikely(!buf->netmem)) return DMA_MAPPING_ERROR; - return page_pool_get_dma_addr(buf->page) + buf->offset + + return page_pool_get_dma_addr_netmem(buf->netmem) + buf->offset + fq->pp->p.offset; } -void libeth_rx_recycle_slow(struct page *page); +void libeth_rx_recycle_slow(netmem_ref netmem); /** * libeth_rx_sync_for_cpu - synchronize or recycle buffer post DMA @@ -126,18 +127,19 @@ void libeth_rx_recycle_slow(struct page *page); static inline bool libeth_rx_sync_for_cpu(const struct libeth_fqe *fqe, u32 len) { - struct page *page = fqe->page; + netmem_ref netmem = fqe->netmem; /* Very rare, but possible case. 
The most common reason: * the last fragment contained FCS only, which was then * stripped by the HW. */ if (unlikely(!len)) { - libeth_rx_recycle_slow(page); + libeth_rx_recycle_slow(netmem); return false; } - page_pool_dma_sync_for_cpu(page->pp, page, fqe->offset, len); + page_pool_dma_sync_netmem_for_cpu(netmem_get_pp(netmem), netmem, + fqe->offset, len); return true; } diff --git a/drivers/net/ethernet/intel/iavf/iavf_txrx.c b/drivers/net/ethernet/intel/iavf/iavf_txrx.c index 422312b8b54a..35d353d38129 100644 --- a/drivers/net/ethernet/intel/iavf/iavf_txrx.c +++ b/drivers/net/ethernet/intel/iavf/iavf_txrx.c @@ -723,7 +723,7 @@ static void iavf_clean_rx_ring(struct iavf_ring *rx_ring) for (u32 i = rx_ring->next_to_clean; i != rx_ring->next_to_use; ) { const struct libeth_fqe *rx_fqes = &rx_ring->rx_fqes[i]; - page_pool_put_full_page(rx_ring->pp, rx_fqes->page, false); + libeth_rx_recycle_slow(rx_fqes->netmem); if (unlikely(++i == rx_ring->count)) i = 0; @@ -1197,10 +1197,11 @@ static void iavf_add_rx_frag(struct sk_buff *skb, const struct libeth_fqe *rx_buffer, unsigned int size) { - u32 hr = rx_buffer->page->pp->p.offset; + u32 hr = netmem_get_pp(rx_buffer->netmem)->p.offset; - skb_add_rx_frag(skb, skb_shinfo(skb)->nr_frags, rx_buffer->page, - rx_buffer->offset + hr, size, rx_buffer->truesize); + skb_add_rx_frag_netmem(skb, skb_shinfo(skb)->nr_frags, + rx_buffer->netmem, rx_buffer->offset + hr, + size, rx_buffer->truesize); } /** @@ -1214,12 +1215,13 @@ static void iavf_add_rx_frag(struct sk_buff *skb, static struct sk_buff *iavf_build_skb(const struct libeth_fqe *rx_buffer, unsigned int size) { - u32 hr = rx_buffer->page->pp->p.offset; + struct page *buf_page = __netmem_to_page(rx_buffer->netmem); + u32 hr = buf_page->pp->p.offset; struct sk_buff *skb; void *va; /* prefetch first cache line of first page */ - va = page_address(rx_buffer->page) + rx_buffer->offset; + va = page_address(buf_page) + rx_buffer->offset; net_prefetch(va + hr); /* build an skb around the page buffer */ diff --git a/drivers/net/ethernet/intel/idpf/idpf_singleq_txrx.c b/drivers/net/ethernet/intel/idpf/idpf_singleq_txrx.c index eae1b6f474e6..aeb2ca5f5a0a 100644 --- a/drivers/net/ethernet/intel/idpf/idpf_singleq_txrx.c +++ b/drivers/net/ethernet/intel/idpf/idpf_singleq_txrx.c @@ -1009,7 +1009,7 @@ static int idpf_rx_singleq_clean(struct idpf_rx_queue *rx_q, int budget) break; skip_data: - rx_buf->page = NULL; + rx_buf->netmem = 0; IDPF_SINGLEQ_BUMP_RING_IDX(rx_q, ntc); cleaned_count++; diff --git a/drivers/net/ethernet/intel/idpf/idpf_txrx.c b/drivers/net/ethernet/intel/idpf/idpf_txrx.c index bdf52cef3891..6254806c2072 100644 --- a/drivers/net/ethernet/intel/idpf/idpf_txrx.c +++ b/drivers/net/ethernet/intel/idpf/idpf_txrx.c @@ -382,12 +382,12 @@ static int idpf_tx_desc_alloc_all(struct idpf_vport *vport) */ static void idpf_rx_page_rel(struct libeth_fqe *rx_buf) { - if (unlikely(!rx_buf->page)) + if (unlikely(!rx_buf->netmem)) return; - page_pool_put_full_page(rx_buf->page->pp, rx_buf->page, false); + libeth_rx_recycle_slow(rx_buf->netmem); - rx_buf->page = NULL; + rx_buf->netmem = 0; rx_buf->offset = 0; } @@ -3096,10 +3096,10 @@ idpf_rx_process_skb_fields(struct idpf_rx_queue *rxq, struct sk_buff *skb, void idpf_rx_add_frag(struct idpf_rx_buf *rx_buf, struct sk_buff *skb, unsigned int size) { - u32 hr = rx_buf->page->pp->p.offset; + u32 hr = netmem_get_pp(rx_buf->netmem)->p.offset; - skb_add_rx_frag(skb, skb_shinfo(skb)->nr_frags, rx_buf->page, - rx_buf->offset + hr, size, rx_buf->truesize); + 
skb_add_rx_frag_netmem(skb, skb_shinfo(skb)->nr_frags, rx_buf->netmem, + rx_buf->offset + hr, size, rx_buf->truesize); } /** @@ -3122,16 +3122,20 @@ static u32 idpf_rx_hsplit_wa(const struct libeth_fqe *hdr, struct libeth_fqe *buf, u32 data_len) { u32 copy = data_len <= L1_CACHE_BYTES ? data_len : ETH_HLEN; + struct page *hdr_page, *buf_page; const void *src; void *dst; - if (!libeth_rx_sync_for_cpu(buf, copy)) + if (unlikely(netmem_is_net_iov(buf->netmem)) || + !libeth_rx_sync_for_cpu(buf, copy)) return 0; - dst = page_address(hdr->page) + hdr->offset + hdr->page->pp->p.offset; - src = page_address(buf->page) + buf->offset + buf->page->pp->p.offset; - memcpy(dst, src, LARGEST_ALIGN(copy)); + hdr_page = __netmem_to_page(hdr->netmem); + buf_page = __netmem_to_page(buf->netmem); + dst = page_address(hdr_page) + hdr->offset + hdr_page->pp->p.offset; + src = page_address(buf_page) + buf->offset + buf_page->pp->p.offset; + memcpy(dst, src, LARGEST_ALIGN(copy)); buf->offset += copy; return copy; @@ -3147,11 +3151,12 @@ static u32 idpf_rx_hsplit_wa(const struct libeth_fqe *hdr, */ struct sk_buff *idpf_rx_build_skb(const struct libeth_fqe *buf, u32 size) { - u32 hr = buf->page->pp->p.offset; + struct page *buf_page = __netmem_to_page(buf->netmem); + u32 hr = buf_page->pp->p.offset; struct sk_buff *skb; void *va; - va = page_address(buf->page) + buf->offset; + va = page_address(buf_page) + buf->offset; prefetch(va + hr); skb = napi_build_skb(va, buf->truesize); @@ -3302,7 +3307,7 @@ static int idpf_rx_splitq_clean(struct idpf_rx_queue *rxq, int budget) u64_stats_update_end(&rxq->stats_sync); } - hdr->page = NULL; + hdr->netmem = 0; payload: if (!libeth_rx_sync_for_cpu(rx_buf, pkt_len)) @@ -3318,7 +3323,7 @@ static int idpf_rx_splitq_clean(struct idpf_rx_queue *rxq, int budget) break; skip_data: - rx_buf->page = NULL; + rx_buf->netmem = 0; idpf_rx_post_buf_refill(refillq, buf_id); IDPF_RX_BUMP_NTC(rxq, ntc); diff --git a/drivers/net/ethernet/intel/libeth/rx.c b/drivers/net/ethernet/intel/libeth/rx.c index 66d1d23b8ad2..aa5d878181f7 100644 --- a/drivers/net/ethernet/intel/libeth/rx.c +++ b/drivers/net/ethernet/intel/libeth/rx.c @@ -1,5 +1,7 @@ // SPDX-License-Identifier: GPL-2.0-only -/* Copyright (C) 2024 Intel Corporation */ +/* Copyright (C) 2024-2025 Intel Corporation */ + +#define DEFAULT_SYMBOL_NAMESPACE "LIBETH" #include @@ -186,7 +188,7 @@ int libeth_rx_fq_create(struct libeth_fq *fq, struct napi_struct *napi) return -ENOMEM; } -EXPORT_SYMBOL_NS_GPL(libeth_rx_fq_create, "LIBETH"); +EXPORT_SYMBOL_GPL(libeth_rx_fq_create); /** * libeth_rx_fq_destroy - destroy a &page_pool created by libeth @@ -197,19 +199,19 @@ void libeth_rx_fq_destroy(struct libeth_fq *fq) kvfree(fq->fqes); page_pool_destroy(fq->pp); } -EXPORT_SYMBOL_NS_GPL(libeth_rx_fq_destroy, "LIBETH"); +EXPORT_SYMBOL_GPL(libeth_rx_fq_destroy); /** - * libeth_rx_recycle_slow - recycle a libeth page from the NAPI context - * @page: page to recycle + * libeth_rx_recycle_slow - recycle libeth netmem + * @netmem: network memory to recycle * * To be used on exceptions or rare cases not requiring fast inline recycling. 
*/ -void libeth_rx_recycle_slow(struct page *page) +void __cold libeth_rx_recycle_slow(netmem_ref netmem) { - page_pool_recycle_direct(page->pp, page); + page_pool_put_full_netmem(netmem_get_pp(netmem), netmem, false); } -EXPORT_SYMBOL_NS_GPL(libeth_rx_recycle_slow, "LIBETH"); +EXPORT_SYMBOL_GPL(libeth_rx_recycle_slow); /* Converting abstract packet type numbers into a software structure with * the packet parameters to do O(1) lookup on Rx. @@ -251,7 +253,7 @@ void libeth_rx_pt_gen_hash_type(struct libeth_rx_pt *pt) pt->hash_type |= libeth_rx_pt_xdp_iprot[pt->inner_prot]; pt->hash_type |= libeth_rx_pt_xdp_pl[pt->payload_layer]; } -EXPORT_SYMBOL_NS_GPL(libeth_rx_pt_gen_hash_type, "LIBETH"); +EXPORT_SYMBOL_GPL(libeth_rx_pt_gen_hash_type); /* Module */
From patchwork Tue Apr 15 17:28:11 2025
X-Patchwork-Submitter: Alexander Lobakin
X-Patchwork-Id: 14052488
X-Patchwork-Delegate: kuba@kernel.org
From: Alexander Lobakin
To: intel-wired-lan@lists.osuosl.org
Cc: Alexander Lobakin, Michal Kubiak, Maciej Fijalkowski, Tony Nguyen, Przemek Kitszel, Andrew Lunn, David S. Miller, Eric Dumazet, Jakub Kicinski, Paolo Abeni, Alexei Starovoitov, Daniel Borkmann, Jesper Dangaard Brouer, John Fastabend, Simon Horman, bpf@vger.kernel.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH iwl-next 02/16] libeth: support native XDP and register memory model
Date: Tue, 15 Apr 2025 19:28:11 +0200
Message-ID: <20250415172825.3731091-3-aleksander.lobakin@intel.com>
In-Reply-To: <20250415172825.3731091-1-aleksander.lobakin@intel.com>
References: <20250415172825.3731091-1-aleksander.lobakin@intel.com>

Expand libeth's Page Pool functionality by adding native XDP support.
This means picking the appropriate headroom and DMA direction.
Also, register all the created &page_pools as XDP memory models.
A driver can then call xdp_rxq_info_attach_page_pool() when registering
its RxQ info.

Signed-off-by: Alexander Lobakin
---
 include/net/libeth/rx.h                |  6 +++++-
 drivers/net/ethernet/intel/libeth/rx.c | 20 +++++++++++++++-----
 2 files changed, 20 insertions(+), 6 deletions(-)

diff --git a/include/net/libeth/rx.h b/include/net/libeth/rx.h index 7d5dc58984b1..5d991404845e 100644 --- a/include/net/libeth/rx.h +++ b/include/net/libeth/rx.h @@ -13,8 +13,10 @@ /* Space reserved in front of each frame */ #define LIBETH_SKB_HEADROOM (NET_SKB_PAD + NET_IP_ALIGN) +#define LIBETH_XDP_HEADROOM (ALIGN(XDP_PACKET_HEADROOM, NET_SKB_PAD) + \ + NET_IP_ALIGN) /* Maximum headroom for worst-case calculations */ -#define LIBETH_MAX_HEADROOM LIBETH_SKB_HEADROOM +#define LIBETH_MAX_HEADROOM LIBETH_XDP_HEADROOM /* Link layer / L2 overhead: Ethernet, 2 VLAN tags (C + S), FCS */ #define LIBETH_RX_LL_LEN (ETH_HLEN + 2 * VLAN_HLEN + ETH_FCS_LEN) /* Maximum supported L2-L4 header length */ @@ -66,6 +68,7 @@ enum libeth_fqe_type { * @count: number of descriptors/buffers the queue has * @type: type of the buffers this queue has * @hsplit: flag whether header split is enabled + * @xdp: flag indicating whether XDP is enabled * @buf_len: HW-writeable length per each buffer * @nid: ID of the closest NUMA node with memory */ @@ -81,6 +84,7 @@ struct libeth_fq { /* Cold fields */ enum libeth_fqe_type type:2; bool hsplit:1; + bool xdp:1; u32 buf_len; int nid; diff --git a/drivers/net/ethernet/intel/libeth/rx.c b/drivers/net/ethernet/intel/libeth/rx.c index aa5d878181f7..c0be9cb043a1 100644 --- a/drivers/net/ethernet/intel/libeth/rx.c +++ b/drivers/net/ethernet/intel/libeth/rx.c @@ -70,7 +70,7 @@ static u32 libeth_rx_hw_len_truesize(const struct page_pool_params *pp, static bool libeth_rx_page_pool_params(struct libeth_fq *fq, struct page_pool_params *pp) { - pp->offset = LIBETH_SKB_HEADROOM; + pp->offset = fq->xdp ?
LIBETH_XDP_HEADROOM : LIBETH_SKB_HEADROOM; /* HW-writeable / syncable length per one page */ pp->max_len = LIBETH_RX_PAGE_LEN(pp->offset); @@ -157,11 +157,12 @@ int libeth_rx_fq_create(struct libeth_fq *fq, struct napi_struct *napi) .dev = napi->dev->dev.parent, .netdev = napi->dev, .napi = napi, - .dma_dir = DMA_FROM_DEVICE, }; struct libeth_fqe *fqes; struct page_pool *pool; - bool ret; + int ret; + + pp.dma_dir = fq->xdp ? DMA_BIDIRECTIONAL : DMA_FROM_DEVICE; if (!fq->hsplit) ret = libeth_rx_page_pool_params(fq, &pp); @@ -175,18 +176,26 @@ int libeth_rx_fq_create(struct libeth_fq *fq, struct napi_struct *napi) return PTR_ERR(pool); fqes = kvcalloc_node(fq->count, sizeof(*fqes), GFP_KERNEL, fq->nid); - if (!fqes) + if (!fqes) { + ret = -ENOMEM; goto err_buf; + } + + ret = xdp_reg_page_pool(pool); + if (ret) + goto err_mem; fq->fqes = fqes; fq->pp = pool; return 0; +err_mem: + kvfree(fqes); err_buf: page_pool_destroy(pool); - return -ENOMEM; + return ret; } EXPORT_SYMBOL_GPL(libeth_rx_fq_create); @@ -196,6 +205,7 @@ EXPORT_SYMBOL_GPL(libeth_rx_fq_create); */ void libeth_rx_fq_destroy(struct libeth_fq *fq) { + xdp_unreg_page_pool(fq->pp); kvfree(fq->fqes); page_pool_destroy(fq->pp); }
From patchwork Tue Apr 15 17:28:12 2025
X-Patchwork-Submitter: Alexander Lobakin
X-Patchwork-Id: 14052489
X-Patchwork-Delegate: kuba@kernel.org
From: Alexander Lobakin
To: intel-wired-lan@lists.osuosl.org
Cc: Alexander Lobakin, Michal Kubiak, Maciej Fijalkowski, Tony Nguyen, Przemek Kitszel, Andrew Lunn, David S. Miller, Eric Dumazet, Jakub Kicinski, Paolo Abeni, Alexei Starovoitov, Daniel Borkmann, Jesper Dangaard Brouer, John Fastabend, Simon Horman, bpf@vger.kernel.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH iwl-next 03/16] libeth: xdp: add XDP_TX buffers sending
Date: Tue, 15 Apr 2025 19:28:12 +0200
Message-ID: <20250415172825.3731091-4-aleksander.lobakin@intel.com>
In-Reply-To: <20250415172825.3731091-1-aleksander.lobakin@intel.com>
References: <20250415172825.3731091-1-aleksander.lobakin@intel.com>

Start adding XDP-specific code to libeth, namely handling XDP_TX
buffers (only sending).

The idea is that we accumulate up to 16 buffers on the stack, then, if
either the limit is reached or the polling is finished, flush them at
once with only one XDPSQ cleaning (if needed). The main sending
function will be aware of the sending budget and already have all the
info to send the buffers, so it can't fail.

Drivers need to provide 2 inline callbacks to the main sending
function: for cleaning an XDPSQ and for filling descriptors; the
library code takes care of the rest.

Note that unlike the generic code, multi-buffer support is not wrapped
here with unlikely() to not hurt header split setups.

&libeth_xdp_buff is a simple extension over &xdp_buff which has a
direct pointer to the corresponding Rx descriptor (and, luckily,
precisely 1 CL size and 16-byte alignment on x86_64).
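
For the record, the driver-side glue around these two callbacks is
expected to look roughly like the sketch below. All mydrv_* names are
hypothetical and only the libeth_* symbols come from this patch:

    /* Export the queue internals via the onstack abstraction and return
     * the number of free descriptors; a real driver would also clean
     * completed descriptors here. @xdp_tx is optional and left NULL.
     */
    static u32 mydrv_xdp_tx_prep(void *xdpsq, struct libeth_xdpsq *sq)
    {
            struct mydrv_xdpsq *xs = xdpsq;

            *sq = (struct libeth_xdpsq){
                    .sqes       = xs->sqes,
                    .descs      = xs->descs,
                    .ntu        = &xs->next_to_use,
                    .count      = xs->count,
                    .pending    = &xs->pending,
            };

            return xs->count - xs->pending;
    }

    /* Translate one abstract Tx descriptor into the HW format */
    static void mydrv_xdp_xmit(struct libeth_xdp_tx_desc desc, u32 i,
                               const struct libeth_xdpsq *sq, u64 priv)
    {
            struct mydrv_tx_desc *txd = (struct mydrv_tx_desc *)sq->descs + i;

            txd->addr = cpu_to_le64(desc.addr);
            txd->cmd_len = cpu_to_le32(desc.len);
    }

    static bool mydrv_xdp_flush_bulk(struct libeth_xdp_tx_bulk *bq, u32 flags)
    {
            return libeth_xdp_tx_flush_bulk(bq, flags, mydrv_xdp_tx_prep,
                                            mydrv_xdp_xmit);
    }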
Suggested-by: Maciej Fijalkowski # xmit logic Signed-off-by: Alexander Lobakin --- drivers/net/ethernet/intel/libeth/Kconfig | 10 +- drivers/net/ethernet/intel/libeth/Makefile | 6 +- include/net/libeth/tx.h | 11 +- include/net/libeth/xdp.h | 534 +++++++++++++++++++++ drivers/net/ethernet/intel/libeth/xdp.c | 87 ++++ 5 files changed, 643 insertions(+), 5 deletions(-) create mode 100644 include/net/libeth/xdp.h create mode 100644 drivers/net/ethernet/intel/libeth/xdp.c diff --git a/drivers/net/ethernet/intel/libeth/Kconfig b/drivers/net/ethernet/intel/libeth/Kconfig index 480293b71dbc..d8c4926574fb 100644 --- a/drivers/net/ethernet/intel/libeth/Kconfig +++ b/drivers/net/ethernet/intel/libeth/Kconfig @@ -1,9 +1,15 @@ # SPDX-License-Identifier: GPL-2.0-only -# Copyright (C) 2024 Intel Corporation +# Copyright (C) 2024-2025 Intel Corporation config LIBETH - tristate + tristate "Common Ethernet library (libeth)" if COMPILE_TEST select PAGE_POOL help libeth is a common library containing routines shared between several drivers, but not yet promoted to the generic kernel API. + +config LIBETH_XDP + tristate "Common XDP library (libeth_xdp)" if COMPILE_TEST + select LIBETH + help + XDP helpers based on libeth hotpath management. diff --git a/drivers/net/ethernet/intel/libeth/Makefile b/drivers/net/ethernet/intel/libeth/Makefile index 52492b081132..9ba78f463f2e 100644 --- a/drivers/net/ethernet/intel/libeth/Makefile +++ b/drivers/net/ethernet/intel/libeth/Makefile @@ -1,6 +1,10 @@ # SPDX-License-Identifier: GPL-2.0-only -# Copyright (C) 2024 Intel Corporation +# Copyright (C) 2024-2025 Intel Corporation obj-$(CONFIG_LIBETH) += libeth.o libeth-y := rx.o + +obj-$(CONFIG_LIBETH_XDP) += libeth_xdp.o + +libeth_xdp-y += xdp.o diff --git a/include/net/libeth/tx.h b/include/net/libeth/tx.h index 35614f9523f6..3e68d11914f7 100644 --- a/include/net/libeth/tx.h +++ b/include/net/libeth/tx.h @@ -1,5 +1,5 @@ /* SPDX-License-Identifier: GPL-2.0-only */ -/* Copyright (C) 2024 Intel Corporation */ +/* Copyright (C) 2024-2025 Intel Corporation */ #ifndef __LIBETH_TX_H #define __LIBETH_TX_H @@ -12,11 +12,13 @@ /** * enum libeth_sqe_type - type of &libeth_sqe to act on Tx completion - * @LIBETH_SQE_EMPTY: unused/empty, no action required + * @LIBETH_SQE_EMPTY: unused/empty OR XDP_TX frag, no action required * @LIBETH_SQE_CTX: context descriptor with empty SQE, no action required * @LIBETH_SQE_SLAB: kmalloc-allocated buffer, unmap and kfree() * @LIBETH_SQE_FRAG: mapped skb frag, only unmap DMA * @LIBETH_SQE_SKB: &sk_buff, unmap and napi_consume_skb(), update stats + * @__LIBETH_SQE_XDP_START: separator between skb and XDP types + * @LIBETH_SQE_XDP_TX: &skb_shared_info, libeth_xdp_return_buff_bulk(), stats */ enum libeth_sqe_type { LIBETH_SQE_EMPTY = 0U, @@ -24,6 +26,9 @@ enum libeth_sqe_type { LIBETH_SQE_SLAB, LIBETH_SQE_FRAG, LIBETH_SQE_SKB, + + __LIBETH_SQE_XDP_START, + LIBETH_SQE_XDP_TX = __LIBETH_SQE_XDP_START, }; /** @@ -32,6 +37,7 @@ enum libeth_sqe_type { * @rs_idx: index of the last buffer from the batch this one was sent in * @raw: slab buffer to free via kfree() * @skb: &sk_buff to consume + * @sinfo: skb shared info of an XDP_TX frame * @dma: DMA address to unmap * @len: length of the mapped region to unmap * @nr_frags: number of frags in the frame this buffer belongs to @@ -46,6 +52,7 @@ struct libeth_sqe { union { void *raw; struct sk_buff *skb; + struct skb_shared_info *sinfo; }; DEFINE_DMA_UNMAP_ADDR(dma); diff --git a/include/net/libeth/xdp.h b/include/net/libeth/xdp.h new file mode 100644 index 
000000000000..946fc8081987 --- /dev/null +++ b/include/net/libeth/xdp.h @@ -0,0 +1,534 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* Copyright (C) 2025 Intel Corporation */ + +#ifndef __LIBETH_XDP_H +#define __LIBETH_XDP_H + +#include +#include + +#include +#include +#include + +/* + * &xdp_buff_xsk is the largest structure &libeth_xdp_buff gets casted to, + * pick maximum pointer-compatible alignment. + */ +#define __LIBETH_XDP_BUFF_ALIGN \ + (IS_ALIGNED(sizeof(struct xdp_buff_xsk), 16) ? 16 : \ + IS_ALIGNED(sizeof(struct xdp_buff_xsk), 8) ? 8 : \ + sizeof(long)) + +/** + * struct libeth_xdp_buff - libeth extension over &xdp_buff + * @base: main &xdp_buff + * @data: shortcut for @base.data + * @desc: RQ descriptor containing metadata for this buffer + * @priv: driver-private scratchspace + * + * The main reason for this is to have a pointer to the descriptor to be able + * to quickly get frame metadata from xdpmo and driver buff-to-xdp callbacks + * (as well as bigger alignment). + * Pointer/layout-compatible with &xdp_buff and &xdp_buff_xsk. + */ +struct libeth_xdp_buff { + union { + struct xdp_buff base; + void *data; + }; + + const void *desc; + unsigned long priv[] + __aligned(__LIBETH_XDP_BUFF_ALIGN); +} __aligned(__LIBETH_XDP_BUFF_ALIGN); +static_assert(offsetof(struct libeth_xdp_buff, data) == + offsetof(struct xdp_buff_xsk, xdp.data)); +static_assert(offsetof(struct libeth_xdp_buff, desc) == + offsetof(struct xdp_buff_xsk, cb)); +static_assert(IS_ALIGNED(sizeof(struct xdp_buff_xsk), + __alignof(struct libeth_xdp_buff))); + +/* Common Tx bits */ + +/** + * enum - libeth_xdp internal Tx flags + * @LIBETH_XDP_TX_BULK: one bulk size at which it will be flushed to the queue + * @LIBETH_XDP_TX_BATCH: batch size for which the queue fill loop is unrolled + * @LIBETH_XDP_TX_DROP: indicates the send function must drop frames not sent + */ +enum { + LIBETH_XDP_TX_BULK = DEV_MAP_BULK_SIZE, + LIBETH_XDP_TX_BATCH = 8, + + LIBETH_XDP_TX_DROP = BIT(0), +}; + +/** + * enum - &libeth_xdp_tx_frame and &libeth_xdp_tx_desc flags + * @LIBETH_XDP_TX_LEN: only for ``XDP_TX``, [15:0] of ::len_fl is actual length + * @LIBETH_XDP_TX_FIRST: indicates the frag is the first one of the frame + * @LIBETH_XDP_TX_LAST: whether the frag is the last one of the frame + * @LIBETH_XDP_TX_MULTI: whether the frame contains several frags + * @LIBETH_XDP_TX_FLAGS: only for ``XDP_TX``, [31:16] of ::len_fl is flags + */ +enum { + LIBETH_XDP_TX_LEN = GENMASK(15, 0), + + LIBETH_XDP_TX_FIRST = BIT(16), + LIBETH_XDP_TX_LAST = BIT(17), + LIBETH_XDP_TX_MULTI = BIT(18), + + LIBETH_XDP_TX_FLAGS = GENMASK(31, 16), +}; + +/** + * struct libeth_xdp_tx_frame - represents one XDP Tx element + * @data: frame start pointer for ``XDP_TX`` + * @len_fl: ``XDP_TX``, combined flags [31:16] and len [15:0] field for speed + * @soff: ``XDP_TX``, offset from @data to the start of &skb_shared_info + * @frag: one (non-head) frag for ``XDP_TX`` + */ +struct libeth_xdp_tx_frame { + union { + /* ``XDP_TX`` */ + struct { + void *data; + u32 len_fl; + u32 soff; + }; + + /* ``XDP_TX`` frag */ + skb_frag_t frag; + }; +} __aligned_largest; +static_assert(offsetof(struct libeth_xdp_tx_frame, frag.len) == + offsetof(struct libeth_xdp_tx_frame, len_fl)); + +/** + * struct libeth_xdp_tx_bulk - XDP Tx frame bulk for bulk sending + * @prog: corresponding active XDP program + * @dev: &net_device which the frames are transmitted on + * @xdpsq: shortcut to the corresponding driver-specific XDPSQ structure + * @count: current number of frames in @bulk + * @bulk: 
array of queued frames for bulk Tx + * + * All XDP Tx operations queue each frame to the bulk first and flush it + * when @count reaches the array end. Bulk is always placed on the stack + * for performance. One bulk element contains all the data necessary + * for sending a frame and then freeing it on completion. + */ +struct libeth_xdp_tx_bulk { + const struct bpf_prog *prog; + struct net_device *dev; + void *xdpsq; + + u32 count; + struct libeth_xdp_tx_frame bulk[LIBETH_XDP_TX_BULK]; +} __aligned(sizeof(struct libeth_xdp_tx_frame)); + +/** + * struct libeth_xdpsq - abstraction for an XDPSQ + * @sqes: array of Tx buffers from the actual queue struct + * @descs: opaque pointer to the HW descriptor array + * @ntu: pointer to the next free descriptor index + * @count: number of descriptors on that queue + * @pending: pointer to the number of sent-not-completed descs on that queue + * @xdp_tx: pointer to the above + * + * Abstraction for driver-independent implementation of Tx. Placed on the stack + * and filled by the driver before the transmission, so that the generic + * functions can access and modify driver-specific resources. + */ +struct libeth_xdpsq { + struct libeth_sqe *sqes; + void *descs; + + u32 *ntu; + u32 count; + + u32 *pending; + u32 *xdp_tx; +}; + +/** + * struct libeth_xdp_tx_desc - abstraction for an XDP Tx descriptor + * @addr: DMA address of the frame + * @len: length of the frame + * @flags: XDP Tx flags + * @opts: combined @len + @flags for speed + * + * Filled by the generic functions and then passed to driver-specific functions + * to fill a HW Tx descriptor, always placed on the [function] stack. + */ +struct libeth_xdp_tx_desc { + dma_addr_t addr; + union { + struct { + u32 len; + u32 flags; + }; + aligned_u64 opts; + }; +} __aligned_largest; + +/** + * libeth_xdp_tx_xmit_bulk - main XDP Tx function + * @bulk: array of frames to send + * @xdpsq: pointer to the driver-specific XDPSQ struct + * @n: number of frames to send + * @unroll: whether to unroll the queue filling loop for speed + * @priv: driver-specific private data + * @prep: callback for cleaning the queue and filling abstract &libeth_xdpsq + * @fill: internal callback for filling &libeth_sqe and &libeth_xdp_tx_desc + * @xmit: callback for filling a HW descriptor with the frame info + * + * Internal abstraction for placing @n XDP Tx frames on the HW XDPSQ. Used for + * all types of frames. + * @unroll greatly increases the object code size, but also greatly increases + * performance. + * The compilers inline all those onstack abstractions to direct data accesses. + * + * Return: number of frames actually placed on the queue, <= @n. The function + * can't fail, but can send fewer frames if there are not enough free + * descriptors available. The actual free space is returned by @prep from the driver.
+ */ +static __always_inline u32 +libeth_xdp_tx_xmit_bulk(const struct libeth_xdp_tx_frame *bulk, void *xdpsq, + u32 n, bool unroll, u64 priv, + u32 (*prep)(void *xdpsq, struct libeth_xdpsq *sq), + struct libeth_xdp_tx_desc + (*fill)(struct libeth_xdp_tx_frame frm, u32 i, + const struct libeth_xdpsq *sq, u64 priv), + void (*xmit)(struct libeth_xdp_tx_desc desc, u32 i, + const struct libeth_xdpsq *sq, u64 priv)) +{ + u32 this, batched, off = 0; + struct libeth_xdpsq sq; + u32 ntu, i = 0; + + n = min(n, prep(xdpsq, &sq)); + if (unlikely(!n)) + return 0; + + ntu = *sq.ntu; + + this = sq.count - ntu; + if (likely(this > n)) + this = n; + +again: + if (!unroll) + goto linear; + + batched = ALIGN_DOWN(this, LIBETH_XDP_TX_BATCH); + + for ( ; i < off + batched; i += LIBETH_XDP_TX_BATCH) { + u32 base = ntu + i - off; + + unrolled_count(LIBETH_XDP_TX_BATCH) + for (u32 j = 0; j < LIBETH_XDP_TX_BATCH; j++) + xmit(fill(bulk[i + j], base + j, &sq, priv), + base + j, &sq, priv); + } + + if (batched < this) { +linear: + for ( ; i < off + this; i++) + xmit(fill(bulk[i], ntu + i - off, &sq, priv), + ntu + i - off, &sq, priv); + } + + ntu += this; + if (likely(ntu < sq.count)) + goto out; + + ntu = 0; + + if (i < n) { + this = n - i; + off = i; + + goto again; + } + +out: + *sq.ntu = ntu; + *sq.pending += n; + if (sq.xdp_tx) + *sq.xdp_tx += n; + + return n; +} + +/* ``XDP_TX`` bulking */ + +void libeth_xdp_return_buff_slow(struct libeth_xdp_buff *xdp); + +/** + * libeth_xdp_tx_queue_head - internal helper for queueing one ``XDP_TX`` head + * @bq: XDP Tx bulk to queue the head frag to + * @xdp: XDP buffer with the head to queue + * + * Return: false if it's the only frag of the frame, true if it's an S/G frame. + */ +static inline bool libeth_xdp_tx_queue_head(struct libeth_xdp_tx_bulk *bq, + const struct libeth_xdp_buff *xdp) +{ + const struct xdp_buff *base = &xdp->base; + + bq->bulk[bq->count++] = (typeof(*bq->bulk)){ + .data = xdp->data, + .len_fl = (base->data_end - xdp->data) | LIBETH_XDP_TX_FIRST, + .soff = xdp_data_hard_end(base) - xdp->data, + }; + + if (!xdp_buff_has_frags(base)) + return false; + + bq->bulk[bq->count - 1].len_fl |= LIBETH_XDP_TX_MULTI; + + return true; +} + +/** + * libeth_xdp_tx_queue_frag - internal helper for queueing one ``XDP_TX`` frag + * @bq: XDP Tx bulk to queue the frag to + * @frag: frag to queue + */ +static inline void libeth_xdp_tx_queue_frag(struct libeth_xdp_tx_bulk *bq, + const skb_frag_t *frag) +{ + bq->bulk[bq->count++].frag = *frag; +} + +/** + * libeth_xdp_tx_queue_bulk - internal helper for queueing one ``XDP_TX`` frame + * @bq: XDP Tx bulk to queue the frame to + * @xdp: XDP buffer to queue + * @flush_bulk: driver callback to flush the bulk to the HW queue + * + * Return: true on success, false on flush error. 
+ */ +static __always_inline bool +libeth_xdp_tx_queue_bulk(struct libeth_xdp_tx_bulk *bq, + struct libeth_xdp_buff *xdp, + bool (*flush_bulk)(struct libeth_xdp_tx_bulk *bq, + u32 flags)) +{ + const struct skb_shared_info *sinfo; + bool ret = true; + u32 nr_frags; + + if (unlikely(bq->count == LIBETH_XDP_TX_BULK) && + unlikely(!flush_bulk(bq, 0))) { + libeth_xdp_return_buff_slow(xdp); + return false; + } + + if (!libeth_xdp_tx_queue_head(bq, xdp)) + goto out; + + sinfo = xdp_get_shared_info_from_buff(&xdp->base); + nr_frags = sinfo->nr_frags; + + for (u32 i = 0; i < nr_frags; i++) { + if (unlikely(bq->count == LIBETH_XDP_TX_BULK) && + unlikely(!flush_bulk(bq, 0))) { + ret = false; + break; + } + + libeth_xdp_tx_queue_frag(bq, &sinfo->frags[i]); + } + +out: + bq->bulk[bq->count - 1].len_fl |= LIBETH_XDP_TX_LAST; + xdp->data = NULL; + + return ret; +} + +/** + * libeth_xdp_tx_fill_stats - fill &libeth_sqe with ``XDP_TX`` frame stats + * @sqe: SQ element to fill + * @desc: libeth_xdp Tx descriptor + * @sinfo: &skb_shared_info for this frame + * + * Internal helper for filling an SQE with the frame stats, do not use in + * drivers. Fills the number of frags and bytes for this frame. + */ +#define libeth_xdp_tx_fill_stats(sqe, desc, sinfo) \ + __libeth_xdp_tx_fill_stats(sqe, desc, sinfo, __UNIQUE_ID(sqe_), \ + __UNIQUE_ID(desc_), __UNIQUE_ID(sinfo_)) + +#define __libeth_xdp_tx_fill_stats(sqe, desc, sinfo, ue, ud, us) do { \ + const struct libeth_xdp_tx_desc *ud = (desc); \ + const struct skb_shared_info *us; \ + struct libeth_sqe *ue = (sqe); \ + \ + ue->nr_frags = 1; \ + ue->bytes = ud->len; \ + \ + if (ud->flags & LIBETH_XDP_TX_MULTI) { \ + us = (sinfo); \ + ue->nr_frags += us->nr_frags; \ + ue->bytes += us->xdp_frags_size; \ + } \ +} while (0) + +/** + * libeth_xdp_tx_fill_buf - internal helper to fill one ``XDP_TX`` &libeth_sqe + * @frm: XDP Tx frame from the bulk + * @i: index on the HW queue + * @sq: XDPSQ abstraction for the queue + * @priv: private data + * + * Return: XDP Tx descriptor with the synced DMA and other info to pass to + * the driver callback. 
+ */ +static inline struct libeth_xdp_tx_desc +libeth_xdp_tx_fill_buf(struct libeth_xdp_tx_frame frm, u32 i, + const struct libeth_xdpsq *sq, u64 priv) +{ + struct libeth_xdp_tx_desc desc; + struct skb_shared_info *sinfo; + skb_frag_t *frag = &frm.frag; + struct libeth_sqe *sqe; + netmem_ref netmem; + + if (frm.len_fl & LIBETH_XDP_TX_FIRST) { + sinfo = frm.data + frm.soff; + skb_frag_fill_netmem_desc(frag, virt_to_netmem(frm.data), + offset_in_page(frm.data), + frm.len_fl); + } else { + sinfo = NULL; + } + + netmem = skb_frag_netmem(frag); + desc = (typeof(desc)){ + .addr = page_pool_get_dma_addr_netmem(netmem) + + skb_frag_off(frag), + .len = skb_frag_size(frag) & LIBETH_XDP_TX_LEN, + .flags = skb_frag_size(frag) & LIBETH_XDP_TX_FLAGS, + }; + + if (sinfo || !netmem_is_net_iov(netmem)) { + const struct page_pool *pp = __netmem_get_pp(netmem); + + dma_sync_single_for_device(pp->p.dev, desc.addr, desc.len, + DMA_BIDIRECTIONAL); + } + + if (!sinfo) + return desc; + + sqe = &sq->sqes[i]; + sqe->type = LIBETH_SQE_XDP_TX; + sqe->sinfo = sinfo; + libeth_xdp_tx_fill_stats(sqe, &desc, sinfo); + + return desc; +} + +void libeth_xdp_tx_exception(struct libeth_xdp_tx_bulk *bq, u32 sent, + u32 flags); + +/** + * __libeth_xdp_tx_flush_bulk - internal helper to flush one XDP Tx bulk + * @bq: bulk to flush + * @flags: XDP TX flags + * @prep: driver-specific callback to prepare the queue for sending + * @fill: libeth_xdp callback to fill &libeth_sqe and &libeth_xdp_tx_desc + * @xmit: driver callback to fill a HW descriptor + * + * Internal abstraction to create bulk flush functions for drivers. + * + * Return: true if anything was sent, false otherwise. + */ +static __always_inline bool +__libeth_xdp_tx_flush_bulk(struct libeth_xdp_tx_bulk *bq, u32 flags, + u32 (*prep)(void *xdpsq, struct libeth_xdpsq *sq), + struct libeth_xdp_tx_desc + (*fill)(struct libeth_xdp_tx_frame frm, u32 i, + const struct libeth_xdpsq *sq, u64 priv), + void (*xmit)(struct libeth_xdp_tx_desc desc, u32 i, + const struct libeth_xdpsq *sq, + u64 priv)) +{ + u32 sent, drops; + int err = 0; + + sent = libeth_xdp_tx_xmit_bulk(bq->bulk, bq->xdpsq, + min(bq->count, LIBETH_XDP_TX_BULK), + false, 0, prep, fill, xmit); + drops = bq->count - sent; + + if (unlikely(drops)) { + libeth_xdp_tx_exception(bq, sent, flags); + err = -ENXIO; + } else { + bq->count = 0; + } + + trace_xdp_bulk_tx(bq->dev, sent, drops, err); + + return likely(sent); +} + +/** + * libeth_xdp_tx_flush_bulk - wrapper to define flush of one ``XDP_TX`` bulk + * @bq: bulk to flush + * @flags: Tx flags, see above + * @prep: driver callback to prepare the queue + * @xmit: driver callback to fill a HW descriptor + */ +#define libeth_xdp_tx_flush_bulk(bq, flags, prep, xmit) \ + __libeth_xdp_tx_flush_bulk(bq, flags, prep, libeth_xdp_tx_fill_buf, \ + xmit) + +/* Rx polling path */ + +static inline void libeth_xdp_return_va(const void *data, bool napi) +{ + netmem_ref netmem = virt_to_netmem(data); + + page_pool_put_full_netmem(__netmem_get_pp(netmem), netmem, napi); +} + +static inline void libeth_xdp_return_frags(const struct skb_shared_info *sinfo, + bool napi) +{ + for (u32 i = 0; i < sinfo->nr_frags; i++) { + netmem_ref netmem = skb_frag_netmem(&sinfo->frags[i]); + + page_pool_put_full_netmem(netmem_get_pp(netmem), netmem, napi); + } +} + +/** + * libeth_xdp_return_buff - free/recycle &libeth_xdp_buff + * @xdp: buffer to free + * + * Hotpath helper to free &libeth_xdp_buff. 
Compared to xdp_return_buff(), + it's faster as it gets inlined and always assumes order-0 pages and safe + direct recycling. Zeroes @xdp->data to avoid UAFs. + */ +#define libeth_xdp_return_buff(xdp) __libeth_xdp_return_buff(xdp, true) + +static inline void __libeth_xdp_return_buff(struct libeth_xdp_buff *xdp, + bool napi) +{ + if (!xdp_buff_has_frags(&xdp->base)) + goto out; + + libeth_xdp_return_frags(xdp_get_shared_info_from_buff(&xdp->base), + napi); + +out: + libeth_xdp_return_va(xdp->data, napi); + xdp->data = NULL; +} + +#endif /* __LIBETH_XDP_H */ diff --git a/drivers/net/ethernet/intel/libeth/xdp.c b/drivers/net/ethernet/intel/libeth/xdp.c new file mode 100644 index 000000000000..7fdb35d36f6f --- /dev/null +++ b/drivers/net/ethernet/intel/libeth/xdp.c @@ -0,0 +1,87 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* Copyright (C) 2025 Intel Corporation */ + +#define DEFAULT_SYMBOL_NAMESPACE "LIBETH_XDP" + +#include + +/* ``XDP_TX`` bulking */ + +static void __cold +libeth_xdp_tx_return_one(const struct libeth_xdp_tx_frame *frm) +{ + if (frm->len_fl & LIBETH_XDP_TX_MULTI) + libeth_xdp_return_frags(frm->data + frm->soff, true); + + libeth_xdp_return_va(frm->data, true); +} + +static void __cold +libeth_xdp_tx_return_bulk(const struct libeth_xdp_tx_frame *bq, u32 count) +{ + for (u32 i = 0; i < count; i++) { + const struct libeth_xdp_tx_frame *frm = &bq[i]; + + if (!(frm->len_fl & LIBETH_XDP_TX_FIRST)) + continue; + + libeth_xdp_tx_return_one(frm); + } +} + +static void __cold libeth_trace_xdp_exception(const struct net_device *dev, + const struct bpf_prog *prog, + u32 act) +{ + trace_xdp_exception(dev, prog, act); +} + +/** + * libeth_xdp_tx_exception - handle Tx exceptions of XDP frames + * @bq: XDP Tx frame bulk + * @sent: number of frames sent successfully (from this bulk) + * @flags: internal libeth_xdp flags + * + * Cold helper used by __libeth_xdp_tx_flush_bulk(), do not call directly. + * Reports XDP Tx exceptions, frees the frames that won't be sent or adjusts + * the Tx bulk to try again later. + */ +void __cold libeth_xdp_tx_exception(struct libeth_xdp_tx_bulk *bq, u32 sent, + u32 flags) +{ + const struct libeth_xdp_tx_frame *pos = &bq->bulk[sent]; + u32 left = bq->count - sent; + + libeth_trace_xdp_exception(bq->dev, bq->prog, XDP_TX); + + if (!(flags & LIBETH_XDP_TX_DROP)) { + memmove(bq->bulk, pos, left * sizeof(*bq->bulk)); + bq->count = left; + + return; + } + + libeth_xdp_tx_return_bulk(pos, left); + + bq->count = 0; +} +EXPORT_SYMBOL_GPL(libeth_xdp_tx_exception); + +/* Rx polling path */ + +/** + * libeth_xdp_return_buff_slow - free &libeth_xdp_buff + * @xdp: buffer to free/return + * + * Slowpath version of libeth_xdp_return_buff() to be called on exceptions, + * queue clean-ups etc., without unwanted inlining.
+ */ +void __cold libeth_xdp_return_buff_slow(struct libeth_xdp_buff *xdp) +{ + __libeth_xdp_return_buff(xdp, false); +} +EXPORT_SYMBOL_GPL(libeth_xdp_return_buff_slow); + +MODULE_DESCRIPTION("Common Ethernet library - XDP infra"); +MODULE_IMPORT_NS("LIBETH"); +MODULE_LICENSE("GPL");
From patchwork Tue Apr 15 17:28:13 2025
X-Patchwork-Submitter: Alexander Lobakin
X-Patchwork-Id: 14052490
X-Patchwork-Delegate: kuba@kernel.org
From: Alexander Lobakin
To: intel-wired-lan@lists.osuosl.org
Cc: Alexander Lobakin, Michal
Kubiak, Maciej Fijalkowski, Tony Nguyen, Przemek Kitszel, Andrew Lunn, David S. Miller, Eric Dumazet, Jakub Kicinski, Paolo Abeni, Alexei Starovoitov, Daniel Borkmann, Jesper Dangaard Brouer, John Fastabend, Simon Horman, bpf@vger.kernel.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH iwl-next 04/16] libeth: xdp: add .ndo_xdp_xmit() helpers
Date: Tue, 15 Apr 2025 19:28:13 +0200
Message-ID: <20250415172825.3731091-5-aleksander.lobakin@intel.com>
In-Reply-To: <20250415172825.3731091-1-aleksander.lobakin@intel.com>
References: <20250415172825.3731091-1-aleksander.lobakin@intel.com>

Add helpers for implementing .ndo_xdp_xmit(). Same as for XDP_TX,
accumulate up to 16 DMA-mapped frames on the stack, then flush. If DMA
mapping fails for some reason, don't try mapping further frames, but
still flush what was already prepared.

DMA address of a head frame is stored in its headroom, assuming it has
enough of it for an 8 (or 4) byte value.

In addition to the @prep and @xmit driver callbacks used for XDP_TX,
.ndo_xdp_xmit() also needs @finalize to kick the XDPSQ after filling.

Signed-off-by: Alexander Lobakin
---
 include/net/libeth/tx.h                 |   6 +
 include/net/libeth/xdp.h                | 290 +++++++++++++++++++++++-
 drivers/net/ethernet/intel/libeth/xdp.c |  37 ++-
 3 files changed, 328 insertions(+), 5 deletions(-)

diff --git a/include/net/libeth/tx.h b/include/net/libeth/tx.h index 3e68d11914f7..e2b62a8b4c57 100644 --- a/include/net/libeth/tx.h +++ b/include/net/libeth/tx.h @@ -19,6 +19,8 @@ * @LIBETH_SQE_SKB: &sk_buff, unmap and napi_consume_skb(), update stats * @__LIBETH_SQE_XDP_START: separator between skb and XDP types * @LIBETH_SQE_XDP_TX: &skb_shared_info, libeth_xdp_return_buff_bulk(), stats + * @LIBETH_SQE_XDP_XMIT: &xdp_frame, unmap and xdp_return_frame_bulk(), stats + * @LIBETH_SQE_XDP_XMIT_FRAG: &xdp_frame frag, only unmap DMA */ enum libeth_sqe_type { LIBETH_SQE_EMPTY = 0U, @@ -29,6 +31,8 @@ enum libeth_sqe_type { __LIBETH_SQE_XDP_START, LIBETH_SQE_XDP_TX = __LIBETH_SQE_XDP_START, + LIBETH_SQE_XDP_XMIT, + LIBETH_SQE_XDP_XMIT_FRAG, }; /** @@ -38,6 +42,7 @@ enum libeth_sqe_type { * @raw: slab buffer to free via kfree() * @skb: &sk_buff to consume * @sinfo: skb shared info of an XDP_TX frame + * @xdpf: XDP frame from ::ndo_xdp_xmit() * @dma: DMA address to unmap * @len: length of the mapped region to unmap * @nr_frags: number of frags in the frame this buffer belongs to @@ -53,6 +58,7 @@ struct libeth_sqe { void *raw; struct sk_buff *skb; struct skb_shared_info *sinfo; + struct xdp_frame *xdpf; }; DEFINE_DMA_UNMAP_ADDR(dma); diff --git a/include/net/libeth/xdp.h b/include/net/libeth/xdp.h index 946fc8081987..5438271662bd 100644 --- a/include/net/libeth/xdp.h +++ b/include/net/libeth/xdp.h @@ -11,6 +11,17 @@ #include #include +/* + * Defined as bits to be able to use them as a mask on Rx. + * Also used as internal return values on Tx. + */ +enum { + LIBETH_XDP_PASS = 0U, + LIBETH_XDP_DROP = BIT(0), + LIBETH_XDP_ABORTED = BIT(1), + LIBETH_XDP_TX = BIT(2), +}; + /* * &xdp_buff_xsk is the largest structure &libeth_xdp_buff gets casted to, * pick maximum pointer-compatible alignment.
@@ -56,12 +67,14 @@ static_assert(IS_ALIGNED(sizeof(struct xdp_buff_xsk), * @LIBETH_XDP_TX_BULK: one bulk size at which it will be flushed to the queue * @LIBETH_XDP_TX_BATCH: batch size for which the queue fill loop is unrolled * @LIBETH_XDP_TX_DROP: indicates the send function must drop frames not sent + * @LIBETH_XDP_TX_NDO: whether the send function is called from .ndo_xdp_xmit() */ enum { LIBETH_XDP_TX_BULK = DEV_MAP_BULK_SIZE, LIBETH_XDP_TX_BATCH = 8, LIBETH_XDP_TX_DROP = BIT(0), + LIBETH_XDP_TX_NDO = BIT(1), }; /** @@ -88,6 +101,11 @@ enum { * @len_fl: ``XDP_TX``, combined flags [31:16] and len [15:0] field for speed * @soff: ``XDP_TX``, offset from @data to the start of &skb_shared_info * @frag: one (non-head) frag for ``XDP_TX`` + * @xdpf: &xdp_frame for the head frag for .ndo_xdp_xmit() + * @dma: DMA address of the non-head frag for .ndo_xdp_xmit() + * @len: frag length for .ndo_xdp_xmit() + * @flags: Tx flags for the above + * @opts: combined @len + @flags for the above for speed */ struct libeth_xdp_tx_frame { union { @@ -100,6 +118,21 @@ struct libeth_xdp_tx_frame { /* ``XDP_TX`` frag */ skb_frag_t frag; + + /* .ndo_xdp_xmit() */ + struct { + union { + struct xdp_frame *xdpf; + dma_addr_t dma; + }; + union { + struct { + u32 len; + u32 flags; + }; + aligned_u64 opts; + }; + }; }; } __aligned_largest; static_assert(offsetof(struct libeth_xdp_tx_frame, frag.len) == @@ -107,7 +140,7 @@ static_assert(offsetof(struct libeth_xdp_tx_frame, frag.len) == /** * struct libeth_xdp_tx_bulk - XDP Tx frame bulk for bulk sending - * @prog: corresponding active XDP program + * @prog: corresponding active XDP program, %NULL for .ndo_xdp_xmit() * @dev: &net_device which the frames are transmitted on * @xdpsq: shortcut to the corresponding driver-specific XDPSQ structure * @count: current number of frames in @bulk @@ -438,7 +471,7 @@ void libeth_xdp_tx_exception(struct libeth_xdp_tx_bulk *bq, u32 sent, /** * __libeth_xdp_tx_flush_bulk - internal helper to flush one XDP Tx bulk * @bq: bulk to flush - * @flags: XDP TX flags + * @flags: XDP TX flags (.ndo_xdp_xmit() etc.) * @prep: driver-specific callback to prepare the queue for sending * @fill: libeth_xdp callback to fill &libeth_sqe and &libeth_xdp_tx_desc * @xmit: driver callback to fill a HW descriptor @@ -488,6 +521,259 @@ __libeth_xdp_tx_flush_bulk(struct libeth_xdp_tx_bulk *bq, u32 flags, __libeth_xdp_tx_flush_bulk(bq, flags, prep, libeth_xdp_tx_fill_buf, \ xmit) +/* .ndo_xdp_xmit() implementation */ + +/** + * libeth_xdp_xmit_frame_dma - internal helper to access DMA of an &xdp_frame + * @xf: pointer to the XDP frame + * + * There's no place in &libeth_xdp_tx_frame to store DMA address for an + * &xdp_frame head. The headroom is used then, the address is placed right + * after the frame struct, naturally aligned. + * + * Return: pointer to the DMA address to use. 
+ */ +#define libeth_xdp_xmit_frame_dma(xf) \ + _Generic((xf), \ + const struct xdp_frame *: \ + (const dma_addr_t *)__libeth_xdp_xmit_frame_dma(xf), \ + struct xdp_frame *: \ + (dma_addr_t *)__libeth_xdp_xmit_frame_dma(xf) \ + ) + +static inline void *__libeth_xdp_xmit_frame_dma(const struct xdp_frame *xdpf) +{ + void *addr = (void *)(xdpf + 1); + + if (!IS_ENABLED(CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS) && + __alignof(*xdpf) < sizeof(dma_addr_t)) + addr = PTR_ALIGN(addr, sizeof(dma_addr_t)); + + return addr; +} + +/** + * libeth_xdp_xmit_queue_head - internal helper for queueing one XDP xmit head + * @bq: XDP Tx bulk to queue the head frag to + * @xdpf: XDP frame with the head to queue + * @dev: device to perform DMA mapping + * + * Return: ``LIBETH_XDP_DROP`` on DMA mapping error, + * ``LIBETH_XDP_PASS`` if it's the only frag in the frame, + * ``LIBETH_XDP_TX`` if it's an S/G frame. + */ +static inline u32 libeth_xdp_xmit_queue_head(struct libeth_xdp_tx_bulk *bq, + struct xdp_frame *xdpf, + struct device *dev) +{ + dma_addr_t dma; + + dma = dma_map_single(dev, xdpf->data, xdpf->len, DMA_TO_DEVICE); + if (dma_mapping_error(dev, dma)) + return LIBETH_XDP_DROP; + + *libeth_xdp_xmit_frame_dma(xdpf) = dma; + + bq->bulk[bq->count++] = (typeof(*bq->bulk)){ + .xdpf = xdpf, + .len = xdpf->len, + .flags = LIBETH_XDP_TX_FIRST, + }; + + if (!xdp_frame_has_frags(xdpf)) + return LIBETH_XDP_PASS; + + bq->bulk[bq->count - 1].flags |= LIBETH_XDP_TX_MULTI; + + return LIBETH_XDP_TX; +} + +/** + * libeth_xdp_xmit_queue_frag - internal helper for queueing one XDP xmit frag + * @bq: XDP Tx bulk to queue the frag to + * @frag: frag to queue + * @dev: device to perform DMA mapping + * + * Return: true on success, false on DMA mapping error. + */ +static inline bool libeth_xdp_xmit_queue_frag(struct libeth_xdp_tx_bulk *bq, + const skb_frag_t *frag, + struct device *dev) +{ + dma_addr_t dma; + + dma = skb_frag_dma_map(dev, frag); + if (dma_mapping_error(dev, dma)) + return false; + + bq->bulk[bq->count++] = (typeof(*bq->bulk)){ + .dma = dma, + .len = skb_frag_size(frag), + }; + + return true; +} + +/** + * libeth_xdp_xmit_queue_bulk - internal helper for queueing one XDP xmit frame + * @bq: XDP Tx bulk to queue the frame to + * @xdpf: XDP frame to queue + * @flush_bulk: driver callback to flush the bulk to the HW queue + * + * Return: ``LIBETH_XDP_TX`` on success, + * ``LIBETH_XDP_DROP`` if the frame should be dropped by the stack, + * ``LIBETH_XDP_ABORTED`` if the frame will be dropped by libeth_xdp. 
+ */ +static __always_inline u32 +libeth_xdp_xmit_queue_bulk(struct libeth_xdp_tx_bulk *bq, + struct xdp_frame *xdpf, + bool (*flush_bulk)(struct libeth_xdp_tx_bulk *bq, + u32 flags)) +{ + u32 head, nr_frags, i, ret = LIBETH_XDP_TX; + struct device *dev = bq->dev->dev.parent; + const struct skb_shared_info *sinfo; + + if (unlikely(bq->count == LIBETH_XDP_TX_BULK) && + unlikely(!flush_bulk(bq, LIBETH_XDP_TX_NDO))) + return LIBETH_XDP_DROP; + + head = libeth_xdp_xmit_queue_head(bq, xdpf, dev); + if (head == LIBETH_XDP_PASS) + goto out; + else if (head == LIBETH_XDP_DROP) + return LIBETH_XDP_DROP; + + sinfo = xdp_get_shared_info_from_frame(xdpf); + nr_frags = sinfo->nr_frags; + + for (i = 0; i < nr_frags; i++) { + if (unlikely(bq->count == LIBETH_XDP_TX_BULK) && + unlikely(!flush_bulk(bq, LIBETH_XDP_TX_NDO))) + break; + + if (!libeth_xdp_xmit_queue_frag(bq, &sinfo->frags[i], dev)) + break; + } + + if (unlikely(i < nr_frags)) + ret = LIBETH_XDP_ABORTED; + +out: + bq->bulk[bq->count - 1].flags |= LIBETH_XDP_TX_LAST; + + return ret; +} + +/** + * libeth_xdp_xmit_fill_buf - internal helper to fill one XDP xmit &libeth_sqe + * @frm: XDP Tx frame from the bulk + * @i: index on the HW queue + * @sq: XDPSQ abstraction for the queue + * @priv: private data + * + * Return: XDP Tx descriptor with the mapped DMA and other info to pass to + * the driver callback. + */ +static inline struct libeth_xdp_tx_desc +libeth_xdp_xmit_fill_buf(struct libeth_xdp_tx_frame frm, u32 i, + const struct libeth_xdpsq *sq, u64 priv) +{ + struct libeth_xdp_tx_desc desc; + struct libeth_sqe *sqe; + struct xdp_frame *xdpf; + + if (frm.flags & LIBETH_XDP_TX_FIRST) { + xdpf = frm.xdpf; + desc.addr = *libeth_xdp_xmit_frame_dma(xdpf); + } else { + xdpf = NULL; + desc.addr = frm.dma; + } + desc.opts = frm.opts; + + sqe = &sq->sqes[i]; + dma_unmap_addr_set(sqe, dma, desc.addr); + dma_unmap_len_set(sqe, len, desc.len); + + if (!xdpf) { + sqe->type = LIBETH_SQE_XDP_XMIT_FRAG; + return desc; + } + + sqe->type = LIBETH_SQE_XDP_XMIT; + sqe->xdpf = xdpf; + libeth_xdp_tx_fill_stats(sqe, &desc, + xdp_get_shared_info_from_frame(xdpf)); + + return desc; +} + +/** + * libeth_xdp_xmit_flush_bulk - wrapper to define flush of one XDP xmit bulk + * @bq: bulk to flush + * @flags: Tx flags, see __libeth_xdp_tx_flush_bulk() + * @prep: driver callback to prepare the queue + * @xmit: driver callback to fill a HW descriptor + */ +#define libeth_xdp_xmit_flush_bulk(bq, flags, prep, xmit) \ + __libeth_xdp_tx_flush_bulk(bq, (flags) | LIBETH_XDP_TX_NDO, prep, \ + libeth_xdp_xmit_fill_buf, xmit) + +u32 libeth_xdp_xmit_return_bulk(const struct libeth_xdp_tx_frame *bq, + u32 count, const struct net_device *dev); + +/** + * __libeth_xdp_xmit_do_bulk - internal function to implement .ndo_xdp_xmit() + * @bq: XDP Tx bulk to queue frames to + * @frames: XDP frames passed by the stack + * @n: number of frames + * @flags: flags passed by the stack + * @flush_bulk: driver callback to flush an XDP xmit bulk + * @finalize: driver callback to finalize sending XDP Tx frames on the queue + * + * Perform common checks, map the frags and queue them to the bulk, then flush + * the bulk to the XDPSQ. If requested by the stack, finalize the queue. + * + * Return: number of frames sent or -errno on error.
+ */ +static __always_inline int +__libeth_xdp_xmit_do_bulk(struct libeth_xdp_tx_bulk *bq, + struct xdp_frame **frames, u32 n, u32 flags, + bool (*flush_bulk)(struct libeth_xdp_tx_bulk *bq, + u32 flags), + void (*finalize)(void *xdpsq, bool sent, bool flush)) +{ + u32 nxmit = 0; + + if (unlikely(flags & ~XDP_XMIT_FLAGS_MASK)) + return -EINVAL; + + for (u32 i = 0; likely(i < n); i++) { + u32 ret; + + ret = libeth_xdp_xmit_queue_bulk(bq, frames[i], flush_bulk); + if (unlikely(ret != LIBETH_XDP_TX)) { + nxmit += ret == LIBETH_XDP_ABORTED; + break; + } + + nxmit++; + } + + if (bq->count) { + flush_bulk(bq, LIBETH_XDP_TX_NDO); + if (unlikely(bq->count)) + nxmit -= libeth_xdp_xmit_return_bulk(bq->bulk, + bq->count, + bq->dev); + } + + finalize(bq->xdpsq, nxmit, flags & XDP_XMIT_FLUSH); + + return nxmit; +} + /* Rx polling path */ static inline void libeth_xdp_return_va(const void *data, bool napi) diff --git a/drivers/net/ethernet/intel/libeth/xdp.c b/drivers/net/ethernet/intel/libeth/xdp.c index 7fdb35d36f6f..0908520d1d38 100644 --- a/drivers/net/ethernet/intel/libeth/xdp.c +++ b/drivers/net/ethernet/intel/libeth/xdp.c @@ -40,7 +40,7 @@ static void __cold libeth_trace_xdp_exception(const struct net_device *dev, * libeth_xdp_tx_exception - handle Tx exceptions of XDP frames * @bq: XDP Tx frame bulk * @sent: number of frames sent successfully (from this bulk) - * @flags: internal libeth_xdp flags + * @flags: internal libeth_xdp flags (.ndo_xdp_xmit etc.) * * Cold helper used by __libeth_xdp_tx_flush_bulk(), do not call directly. * Reports XDP Tx exceptions, frees the frames that won't be sent or adjust @@ -52,7 +52,8 @@ void __cold libeth_xdp_tx_exception(struct libeth_xdp_tx_bulk *bq, u32 sent, const struct libeth_xdp_tx_frame *pos = &bq->bulk[sent]; u32 left = bq->count - sent; - libeth_trace_xdp_exception(bq->dev, bq->prog, XDP_TX); + if (!(flags & LIBETH_XDP_TX_NDO)) + libeth_trace_xdp_exception(bq->dev, bq->prog, XDP_TX); if (!(flags & LIBETH_XDP_TX_DROP)) { memmove(bq->bulk, pos, left * sizeof(*bq->bulk)); @@ -61,12 +62,42 @@ void __cold libeth_xdp_tx_exception(struct libeth_xdp_tx_bulk *bq, u32 sent, return; } - libeth_xdp_tx_return_bulk(pos, left); + if (!(flags & LIBETH_XDP_TX_NDO)) + libeth_xdp_tx_return_bulk(pos, left); + else + libeth_xdp_xmit_return_bulk(pos, left, bq->dev); bq->count = 0; } EXPORT_SYMBOL_GPL(libeth_xdp_tx_exception); +/* .ndo_xdp_xmit() implementation */ + +u32 __cold libeth_xdp_xmit_return_bulk(const struct libeth_xdp_tx_frame *bq, + u32 count, const struct net_device *dev) +{ + u32 n = 0; + + for (u32 i = 0; i < count; i++) { + const struct libeth_xdp_tx_frame *frm = &bq[i]; + dma_addr_t dma; + + if (frm->flags & LIBETH_XDP_TX_FIRST) + dma = *libeth_xdp_xmit_frame_dma(frm->xdpf); + else + dma = dma_unmap_addr(frm, dma); + + dma_unmap_page(dev->dev.parent, dma, dma_unmap_len(frm, len), + DMA_TO_DEVICE); + + /* Actual xdp_frames are freed by the core */ + n += !!(frm->flags & LIBETH_XDP_TX_FIRST); + } + + return n; +} +EXPORT_SYMBOL_GPL(libeth_xdp_xmit_return_bulk); + /* Rx polling path */
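For illustration only (not part of the patch): a driver's .ndo_xdp_xmit() built on these helpers could look roughly like the sketch below. The mydrv_* callbacks and the XDPSQ lookup are hypothetical; only the libeth_xdp calls and the bulk fields come from this patch.

/* Hypothetical driver callbacks; assumed to exist elsewhere in the driver */
static u32 mydrv_xdpsq_prep(void *xdpsq, struct libeth_xdpsq *sq);
static void mydrv_xmit_fill_desc(struct libeth_xdp_tx_desc desc, u32 i,
				 const struct libeth_xdpsq *sq, u64 priv);
static void mydrv_finalize_xdpsq(void *xdpsq, bool sent, bool flush);
static void *mydrv_pick_xdpsq(struct net_device *dev);

static bool mydrv_xmit_flush_bulk(struct libeth_xdp_tx_bulk *bq, u32 flags)
{
	/* @prep and @xmit are the driver's queue-prepare and
	 * HW-descriptor-fill callbacks mentioned in the commit message
	 */
	return libeth_xdp_xmit_flush_bulk(bq, flags, mydrv_xdpsq_prep,
					  mydrv_xmit_fill_desc);
}

static int mydrv_ndo_xdp_xmit(struct net_device *dev, int n,
			      struct xdp_frame **frames, u32 flags)
{
	struct libeth_xdp_tx_bulk bq = {
		.prog	= NULL,			/* no prog for .ndo_xdp_xmit() */
		.dev	= dev,
		.xdpsq	= mydrv_pick_xdpsq(dev),	/* hypothetical lookup */
	};

	return __libeth_xdp_xmit_do_bulk(&bq, frames, n, flags,
					 mydrv_xmit_flush_bulk,
					 mydrv_finalize_xdpsq);
}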
From patchwork Tue Apr 15 17:28:14 2025 X-Patchwork-Submitter: Alexander Lobakin X-Patchwork-Id: 14052491 X-Patchwork-Delegate: kuba@kernel.org From: Alexander Lobakin To: intel-wired-lan@lists.osuosl.org Cc: Alexander Lobakin , Michal Kubiak , Maciej Fijalkowski , Tony Nguyen , Przemek Kitszel , Andrew Lunn , "David S.
Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni , Alexei Starovoitov , Daniel Borkmann , Jesper Dangaard Brouer , John Fastabend , Simon Horman , bpf@vger.kernel.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org Subject: [PATCH iwl-next 05/16] libeth: xdp: add XDPSQE completion helpers Date: Tue, 15 Apr 2025 19:28:14 +0200 Message-ID: <20250415172825.3731091-6-aleksander.lobakin@intel.com> X-Mailer: git-send-email 2.49.0 In-Reply-To: <20250415172825.3731091-1-aleksander.lobakin@intel.com> References: <20250415172825.3731091-1-aleksander.lobakin@intel.com> Precedence: bulk X-Mailing-List: bpf@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Patchwork-Delegate: kuba@kernel.org Similarly to libeth_tx_complete(), add libeth_xdp_complete_tx() to handle XDP_TX and xmit buffers. Both use bulk return under the hood. Also add out of line libeth_tx_complete_any() which handles both regular and XDP frames (if libeth_xdp is loaded), for example, to call on queue destroy, where we don't need inlining but convenience. Signed-off-by: Alexander Lobakin --- drivers/net/ethernet/intel/libeth/Makefile | 1 + include/net/libeth/types.h | 21 ++++++- drivers/net/ethernet/intel/libeth/priv.h | 26 +++++++++ include/net/libeth/tx.h | 13 ++++- include/net/libeth/xdp.h | 66 ++++++++++++++++++++++ drivers/net/ethernet/intel/libeth/tx.c | 38 +++++++++++++ drivers/net/ethernet/intel/libeth/xdp.c | 58 +++++++++++++++++++ 7 files changed, 221 insertions(+), 2 deletions(-) create mode 100644 drivers/net/ethernet/intel/libeth/priv.h create mode 100644 drivers/net/ethernet/intel/libeth/tx.c diff --git a/drivers/net/ethernet/intel/libeth/Makefile b/drivers/net/ethernet/intel/libeth/Makefile index 9ba78f463f2e..51669840ee06 100644 --- a/drivers/net/ethernet/intel/libeth/Makefile +++ b/drivers/net/ethernet/intel/libeth/Makefile @@ -4,6 +4,7 @@ obj-$(CONFIG_LIBETH) += libeth.o libeth-y := rx.o +libeth-y += tx.o obj-$(CONFIG_LIBETH_XDP) += libeth_xdp.o diff --git a/include/net/libeth/types.h b/include/net/libeth/types.h index 603825e45133..ad7a5c1f119f 100644 --- a/include/net/libeth/types.h +++ b/include/net/libeth/types.h @@ -1,5 +1,5 @@ /* SPDX-License-Identifier: GPL-2.0-only */ -/* Copyright (C) 2024 Intel Corporation */ +/* Copyright (C) 2024-2025 Intel Corporation */ #ifndef __LIBETH_TYPES_H #define __LIBETH_TYPES_H @@ -22,4 +22,23 @@ struct libeth_sq_napi_stats { }; }; +/** + * struct libeth_xdpsq_napi_stats - "hot" counters to update in XDP Tx + * completion loop + * @packets: completed frames counter + * @bytes: sum of bytes of completed frames above + * @fragments: sum of fragments of completed S/G frames + * @raw: alias to access all the fields as an array + */ +struct libeth_xdpsq_napi_stats { + union { + struct { + u32 packets; + u32 bytes; + u32 fragments; + }; + DECLARE_FLEX_ARRAY(u32, raw); + }; +}; + #endif /* __LIBETH_TYPES_H */ diff --git a/drivers/net/ethernet/intel/libeth/priv.h b/drivers/net/ethernet/intel/libeth/priv.h new file mode 100644 index 000000000000..1bd6e2d7a3e7 --- /dev/null +++ b/drivers/net/ethernet/intel/libeth/priv.h @@ -0,0 +1,26 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* Copyright (C) 2025 Intel Corporation */ + +#ifndef __LIBETH_PRIV_H +#define __LIBETH_PRIV_H + +#include + +/* XDP */ + +struct skb_shared_info; +struct xdp_frame_bulk; + +struct libeth_xdp_ops { + void (*bulk)(const struct skb_shared_info *sinfo, + struct xdp_frame_bulk *bq, bool frags); +}; + +void libeth_attach_xdp(const struct libeth_xdp_ops *ops); + +static inline 
void libeth_detach_xdp(void) +{ + libeth_attach_xdp(NULL); +} + +#endif /* __LIBETH_PRIV_H */ diff --git a/include/net/libeth/tx.h b/include/net/libeth/tx.h index e2b62a8b4c57..33b9bb22f6ac 100644 --- a/include/net/libeth/tx.h +++ b/include/net/libeth/tx.h @@ -84,7 +84,10 @@ struct libeth_sqe { /** * struct libeth_cq_pp - completion queue poll params * @dev: &device to perform DMA unmapping + * @bq: XDP frame bulk to combine return operations * @ss: onstack NAPI stats to fill + * @xss: onstack XDPSQ NAPI stats to fill + * @xdp_tx: number of XDP frames processed * @napi: whether it's called from the NAPI context * * libeth uses this structure to access objects needed for performing full @@ -93,7 +96,13 @@ struct libeth_sqe { */ struct libeth_cq_pp { struct device *dev; - struct libeth_sq_napi_stats *ss; + struct xdp_frame_bulk *bq; + + union { + struct libeth_sq_napi_stats *ss; + struct libeth_xdpsq_napi_stats *xss; + }; + u32 xdp_tx; bool napi; }; @@ -139,4 +148,6 @@ static inline void libeth_tx_complete(struct libeth_sqe *sqe, sqe->type = LIBETH_SQE_EMPTY; } +void libeth_tx_complete_any(struct libeth_sqe *sqe, struct libeth_cq_pp *cp); + #endif /* __LIBETH_TX_H */ diff --git a/include/net/libeth/xdp.h b/include/net/libeth/xdp.h index 5438271662bd..0db6f9c56516 100644 --- a/include/net/libeth/xdp.h +++ b/include/net/libeth/xdp.h @@ -817,4 +817,70 @@ static inline void __libeth_xdp_return_buff(struct libeth_xdp_buff *xdp, xdp->data = NULL; } +/* Tx buffer completion */ + +void libeth_xdp_return_buff_bulk(const struct skb_shared_info *sinfo, + struct xdp_frame_bulk *bq, bool frags); + +/** + * __libeth_xdp_complete_tx - complete sent XDPSQE + * @sqe: SQ element / Tx buffer to complete + * @cp: Tx polling/completion params + * @bulk: internal callback to bulk-free ``XDP_TX`` buffers + * + * Use the non-underscored version in drivers instead. This one is shared + * internally with libeth_tx_complete_any(). + * Complete an XDPSQE of any type of XDP frame. This includes DMA unmapping + * when needed, buffer freeing, stats update, and SQE invalidation. 
+ */ +static __always_inline void +__libeth_xdp_complete_tx(struct libeth_sqe *sqe, struct libeth_cq_pp *cp, + typeof(libeth_xdp_return_buff_bulk) bulk) +{ + enum libeth_sqe_type type = sqe->type; + + switch (type) { + case LIBETH_SQE_EMPTY: + return; + case LIBETH_SQE_XDP_XMIT: + case LIBETH_SQE_XDP_XMIT_FRAG: + dma_unmap_page(cp->dev, dma_unmap_addr(sqe, dma), + dma_unmap_len(sqe, len), DMA_TO_DEVICE); + break; + default: + break; + } + + switch (type) { + case LIBETH_SQE_XDP_TX: + bulk(sqe->sinfo, cp->bq, sqe->nr_frags != 1); + break; + case LIBETH_SQE_XDP_XMIT: + xdp_return_frame_bulk(sqe->xdpf, cp->bq); + break; + default: + break; + } + + switch (type) { + case LIBETH_SQE_XDP_TX: + case LIBETH_SQE_XDP_XMIT: + cp->xdp_tx -= sqe->nr_frags; + + cp->xss->packets++; + cp->xss->bytes += sqe->bytes; + break; + default: + break; + } + + sqe->type = LIBETH_SQE_EMPTY; +} + +static inline void libeth_xdp_complete_tx(struct libeth_sqe *sqe, + struct libeth_cq_pp *cp) +{ + __libeth_xdp_complete_tx(sqe, cp, libeth_xdp_return_buff_bulk); +} + #endif /* __LIBETH_XDP_H */ diff --git a/drivers/net/ethernet/intel/libeth/tx.c b/drivers/net/ethernet/intel/libeth/tx.c new file mode 100644 index 000000000000..227c841ab16a --- /dev/null +++ b/drivers/net/ethernet/intel/libeth/tx.c @@ -0,0 +1,38 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* Copyright (C) 2025 Intel Corporation */ + +#define DEFAULT_SYMBOL_NAMESPACE "LIBETH" + +#include + +#include "priv.h" + +/* Tx buffer completion */ + +DEFINE_STATIC_CALL_NULL(bulk, libeth_xdp_return_buff_bulk); + +/** + * libeth_tx_complete_any - perform Tx completion for one SQE of any type + * @sqe: Tx buffer to complete + * @cp: polling params + * + * Can be used to complete both regular and XDP SQEs, for example when + * destroying queues. + * When libeth_xdp is not loaded, XDPSQEs won't be handled. + */ +void libeth_tx_complete_any(struct libeth_sqe *sqe, struct libeth_cq_pp *cp) +{ + if (sqe->type >= __LIBETH_SQE_XDP_START) + __libeth_xdp_complete_tx(sqe, cp, static_call(bulk)); + else + libeth_tx_complete(sqe, cp); +} +EXPORT_SYMBOL_GPL(libeth_tx_complete_any); + +/* Module */ + +void libeth_attach_xdp(const struct libeth_xdp_ops *ops) +{ + static_call_update(bulk, ops ? ops->bulk : NULL); +} +EXPORT_SYMBOL_GPL(libeth_attach_xdp); diff --git a/drivers/net/ethernet/intel/libeth/xdp.c b/drivers/net/ethernet/intel/libeth/xdp.c index 0908520d1d38..904b19df4f79 100644 --- a/drivers/net/ethernet/intel/libeth/xdp.c +++ b/drivers/net/ethernet/intel/libeth/xdp.c @@ -5,6 +5,8 @@ #include +#include "priv.h" + /* ``XDP_TX`` bulking */ static void __cold @@ -113,6 +115,62 @@ void __cold libeth_xdp_return_buff_slow(struct libeth_xdp_buff *xdp) } EXPORT_SYMBOL_GPL(libeth_xdp_return_buff_slow); +/* Tx buffer completion */ + +static void libeth_xdp_put_netmem_bulk(netmem_ref netmem, + struct xdp_frame_bulk *bq) +{ + if (unlikely(bq->count == XDP_BULK_QUEUE_SIZE)) + xdp_flush_frame_bulk(bq); + + bq->q[bq->count++] = netmem; +} + +/** + * libeth_xdp_return_buff_bulk - free &xdp_buff as part of a bulk + * @sinfo: shared info corresponding to the buffer + * @bq: XDP frame bulk to store the buffer + * @frags: whether the buffer has frags + * + * Same as xdp_return_frame_bulk(), but for &libeth_xdp_buff, speeds up Tx + * completion of ``XDP_TX`` buffers and allows freeing them in the same bulks + * as &xdp_frame buffers.
+ */ +void libeth_xdp_return_buff_bulk(const struct skb_shared_info *sinfo, + struct xdp_frame_bulk *bq, bool frags) +{ + if (!frags) + goto head; + + for (u32 i = 0; i < sinfo->nr_frags; i++) + libeth_xdp_put_netmem_bulk(skb_frag_netmem(&sinfo->frags[i]), + bq); + +head: + libeth_xdp_put_netmem_bulk(virt_to_netmem(sinfo), bq); +} +EXPORT_SYMBOL_GPL(libeth_xdp_return_buff_bulk); + +/* Module */ + +static const struct libeth_xdp_ops xdp_ops __initconst = { + .bulk = libeth_xdp_return_buff_bulk, +}; + +static int __init libeth_xdp_module_init(void) +{ + libeth_attach_xdp(&xdp_ops); + + return 0; +} +module_init(libeth_xdp_module_init); + +static void __exit libeth_xdp_module_exit(void) +{ + libeth_detach_xdp(); +} +module_exit(libeth_xdp_module_exit); + MODULE_DESCRIPTION("Common Ethernet library - XDP infra"); MODULE_IMPORT_NS("LIBETH"); MODULE_LICENSE("GPL");
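For illustration only (not part of the patch): a minimal XDPSQ completion loop built on these helpers might look like the sketch below. The mydrv_xdpsq layout and the ring walk are hypothetical; the libeth types and the xdp_frame_bulk calls are real.

static u32 mydrv_clean_xdpsq(struct mydrv_xdpsq *sq, u32 budget)
{
	struct libeth_xdpsq_napi_stats xss = { };
	struct xdp_frame_bulk bq;
	struct libeth_cq_pp cp = {
		.dev	= sq->dev,		/* hypothetical field */
		.bq	= &bq,
		.xss	= &xss,
		.napi	= true,
	};
	u32 done = 0;

	xdp_frame_bulk_init(&bq);

	/* mydrv_sqe_done() is a hypothetical "HW completed this SQE" check */
	while (done < budget && mydrv_sqe_done(sq)) {
		libeth_xdp_complete_tx(&sq->sqes[sq->next_to_clean], &cp);
		sq->next_to_clean = (sq->next_to_clean + 1) % sq->desc_count;
		done++;
	}

	/* return whatever accumulated in the bulk to the XDP core / PP */
	xdp_flush_frame_bulk(&bq);

	return xss.packets;
}

On queue destroy, libeth_tx_complete_any() can be used in place of libeth_xdp_complete_tx(), since it also copes with regular skb SQEs and with libeth_xdp not being loaded.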
E=Sophos;i="6.15,213,1739865600"; d="scan'208";a="46275688" Received: from fmviesa010.fm.intel.com ([10.60.135.150]) by orvoesa109.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 15 Apr 2025 10:29:19 -0700 X-CSE-ConnectionGUID: FrlRCKaUR/mJqEaVYzKFxw== X-CSE-MsgGUID: aZ42Y15jRx62rbj5ls6ovA== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.15,213,1739865600"; d="scan'208";a="130729712" Received: from newjersey.igk.intel.com ([10.102.20.203]) by fmviesa010.fm.intel.com with ESMTP; 15 Apr 2025 10:29:15 -0700 From: Alexander Lobakin To: intel-wired-lan@lists.osuosl.org Cc: Alexander Lobakin , Michal Kubiak , Maciej Fijalkowski , Tony Nguyen , Przemek Kitszel , Andrew Lunn , "David S. Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni , Alexei Starovoitov , Daniel Borkmann , Jesper Dangaard Brouer , John Fastabend , Simon Horman , bpf@vger.kernel.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org Subject: [PATCH iwl-next 06/16] libeth: xdp: add XDPSQ locking helpers Date: Tue, 15 Apr 2025 19:28:15 +0200 Message-ID: <20250415172825.3731091-7-aleksander.lobakin@intel.com> X-Mailer: git-send-email 2.49.0 In-Reply-To: <20250415172825.3731091-1-aleksander.lobakin@intel.com> References: <20250415172825.3731091-1-aleksander.lobakin@intel.com> Precedence: bulk X-Mailing-List: bpf@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Patchwork-Delegate: kuba@kernel.org Unfortunately, it's not always possible to allocate max(num_rxqs, nr_cpu_ids) even on hi-end NICs. To mitigate this, add simple locking helpers to libeth_xdp. As long as XDPSQs are not shared, the whole functionality is gated behind a static lock. Otherwise, each bulk flush locks the queue for the time of cleaning and filling the descriptors. As long as this particular queue is not used by more than 1 CPU, the impact is minimal (runtime check for boolean twice per 16+ descriptors). Suggested-by: Maciej Fijalkowski # static key Signed-off-by: Alexander Lobakin --- include/net/libeth/types.h | 21 +++- include/net/libeth/xdp.h | 127 +++++++++++++++++++++++- drivers/net/ethernet/intel/libeth/xdp.c | 47 +++++++++ 3 files changed, 192 insertions(+), 3 deletions(-) diff --git a/include/net/libeth/types.h b/include/net/libeth/types.h index ad7a5c1f119f..abfccae1a346 100644 --- a/include/net/libeth/types.h +++ b/include/net/libeth/types.h @@ -4,7 +4,7 @@ #ifndef __LIBETH_TYPES_H #define __LIBETH_TYPES_H -#include +#include /** * struct libeth_sq_napi_stats - "hot" counters to update in Tx completion loop @@ -41,4 +41,23 @@ struct libeth_xdpsq_napi_stats { }; }; +/* XDP */ + +/* + * The following structures should be embedded into driver's queue structure + * and passed to the libeth_xdp helpers, never used directly. 
+ */ + +/* XDPSQ sharing */ + +/** + * struct libeth_xdpsq_lock - locking primitive for sharing XDPSQs + * @lock: spinlock for locking the queue + * @share: whether this particular queue is shared + */ +struct libeth_xdpsq_lock { + spinlock_t lock; + bool share; +}; + #endif /* __LIBETH_TYPES_H */ diff --git a/include/net/libeth/xdp.h b/include/net/libeth/xdp.h index 0db6f9c56516..47ec5c59a586 100644 --- a/include/net/libeth/xdp.h +++ b/include/net/libeth/xdp.h @@ -60,6 +60,123 @@ static_assert(offsetof(struct libeth_xdp_buff, desc) == static_assert(IS_ALIGNED(sizeof(struct xdp_buff_xsk), __alignof(struct libeth_xdp_buff))); +/* XDPSQ sharing */ + +DECLARE_STATIC_KEY_FALSE(libeth_xdpsq_share); + +/** + * libeth_xdpsq_num - calculate optimal number of XDPSQs for this device + sys + * @rxq: current number of active Rx queues + * @txq: current number of active Tx queues + * @max: maximum number of Tx queues + * + * Each RQ must have its own XDPSQ for XSk pairs, each CPU must have its own + * XDPSQ for lockless sending (``XDP_TX``, .ndo_xdp_xmit()). Cap the maximum + * of these two with the number of SQs the device can have (minus used ones). + * + * Return: number of XDP Tx queues the device needs to use. + */ +static inline u32 libeth_xdpsq_num(u32 rxq, u32 txq, u32 max) +{ + return min(max(nr_cpu_ids, rxq), max - txq); +} + +/** + * libeth_xdpsq_shared - whether XDPSQs can be shared between several CPUs + * @num: number of active XDPSQs + * + * Return: true if there's no 1:1 XDPSQ/CPU association, false otherwise. + */ +static inline bool libeth_xdpsq_shared(u32 num) +{ + return num < nr_cpu_ids; +} + +/** + * libeth_xdpsq_id - get XDPSQ index corresponding to this CPU + * @num: number of active XDPSQs + * + * Helper for libeth_xdp routines, do not use in drivers directly. + * + * Return: XDPSQ index to be used on this CPU. + */ +static inline u32 libeth_xdpsq_id(u32 num) +{ + u32 ret = raw_smp_processor_id(); + + if (static_branch_unlikely(&libeth_xdpsq_share) && + libeth_xdpsq_shared(num)) + ret %= num; + + return ret; +} + +void __libeth_xdpsq_get(struct libeth_xdpsq_lock *lock, + const struct net_device *dev); +void __libeth_xdpsq_put(struct libeth_xdpsq_lock *lock, + const struct net_device *dev); + +/** + * libeth_xdpsq_get - initialize &libeth_xdpsq_lock + * @lock: lock to initialize + * @dev: netdev which this lock belongs to + * @share: whether XDPSQs can be shared + * + * Tracks the current XDPSQ association and enables the static key + * if needed. + */ +static inline void libeth_xdpsq_get(struct libeth_xdpsq_lock *lock, + const struct net_device *dev, + bool share) +{ + if (unlikely(share)) + __libeth_xdpsq_get(lock, dev); +} + +/** + * libeth_xdpsq_put - deinitialize &libeth_xdpsq_lock + * @lock: lock to deinitialize + * @dev: netdev which this lock belongs to + * + * Tracks the current XDPSQ association and disables the static key + * if needed. + */ +static inline void libeth_xdpsq_put(struct libeth_xdpsq_lock *lock, + const struct net_device *dev) +{ + if (static_branch_unlikely(&libeth_xdpsq_share) && lock->share) + __libeth_xdpsq_put(lock, dev); +} + +void __libeth_xdpsq_lock(struct libeth_xdpsq_lock *lock); +void __libeth_xdpsq_unlock(struct libeth_xdpsq_lock *lock); + +/** + * libeth_xdpsq_lock - grab &libeth_xdpsq_lock if needed + * @lock: lock to take + * + * Touches the underlying spinlock only if the static key is enabled + * and the queue itself is marked as shareable.
+ */ +static inline void libeth_xdpsq_lock(struct libeth_xdpsq_lock *lock) +{ + if (static_branch_unlikely(&libeth_xdpsq_share) && lock->share) + __libeth_xdpsq_lock(lock); +} + +/** + * libeth_xdpsq_unlock - free &libeth_xdpsq_lock if needed + * @lock: lock to free + * + * Touches the underlying spinlock only if the static key is enabled + * and the queue itself is marked as shareable. + */ +static inline void libeth_xdpsq_unlock(struct libeth_xdpsq_lock *lock) +{ + if (static_branch_unlikely(&libeth_xdpsq_share) && lock->share) + __libeth_xdpsq_unlock(lock); +} + /* Common Tx bits */ /** @@ -168,6 +285,7 @@ struct libeth_xdp_tx_bulk { * @count: number of descriptors on that queue * @pending: pointer to the number of sent-not-completed descs on that queue * @xdp_tx: pointer to the above + * @lock: corresponding XDPSQ lock * * Abstraction for driver-independent implementation of Tx. Placed on the stack * and filled by the driver before the transmission, so that the generic @@ -182,6 +300,7 @@ struct libeth_xdpsq { u32 *pending; u32 *xdp_tx; + struct libeth_xdpsq_lock *lock; }; /** @@ -218,7 +337,8 @@ struct libeth_xdp_tx_desc { * * Internal abstraction for placing @n XDP Tx frames on the HW XDPSQ. Used for * all types of frames. - * @unroll greatly increases the object code size, but also greatly increases + * @prep must lock the queue as this function releases it at the end. @unroll + * greatly increases the object code size, but also greatly increases * performance. * The compilers inline all those onstack abstractions to direct data accesses. * @@ -242,7 +362,7 @@ libeth_xdp_tx_xmit_bulk(const struct libeth_xdp_tx_frame *bulk, void *xdpsq, n = min(n, prep(xdpsq, &sq)); if (unlikely(!n)) - return 0; + goto unlock; ntu = *sq.ntu; @@ -291,6 +411,9 @@ libeth_xdp_tx_xmit_bulk(const struct libeth_xdp_tx_frame *bulk, void *xdpsq, if (sq.xdp_tx) *sq.xdp_tx += n; +unlock: + libeth_xdpsq_unlock(sq.lock); + return n; } diff --git a/drivers/net/ethernet/intel/libeth/xdp.c b/drivers/net/ethernet/intel/libeth/xdp.c index 904b19df4f79..0c0aa3b1d49d 100644 --- a/drivers/net/ethernet/intel/libeth/xdp.c +++ b/drivers/net/ethernet/intel/libeth/xdp.c @@ -7,6 +7,53 @@ #include "priv.h" +/* XDPSQ sharing */ + +DEFINE_STATIC_KEY_FALSE(libeth_xdpsq_share); +EXPORT_SYMBOL_GPL(libeth_xdpsq_share); + +void __libeth_xdpsq_get(struct libeth_xdpsq_lock *lock, + const struct net_device *dev) +{ + bool warn; + + spin_lock_init(&lock->lock); + lock->share = true; + + warn = !static_key_enabled(&libeth_xdpsq_share); + static_branch_inc(&libeth_xdpsq_share); + + if (warn && net_ratelimit()) + netdev_warn(dev, "XDPSQ sharing enabled, possible XDP Tx slowdown\n"); +} +EXPORT_SYMBOL_GPL(__libeth_xdpsq_get); + +void __libeth_xdpsq_put(struct libeth_xdpsq_lock *lock, + const struct net_device *dev) +{ + static_branch_dec(&libeth_xdpsq_share); + + if (!static_key_enabled(&libeth_xdpsq_share) && net_ratelimit()) + netdev_notice(dev, "XDPSQ sharing disabled\n"); + + lock->share = false; +} +EXPORT_SYMBOL_GPL(__libeth_xdpsq_put); + +void __acquires(&lock->lock) +__libeth_xdpsq_lock(struct libeth_xdpsq_lock *lock) +{ + spin_lock(&lock->lock); +} +EXPORT_SYMBOL_GPL(__libeth_xdpsq_lock); + +void __releases(&lock->lock) +__libeth_xdpsq_unlock(struct libeth_xdpsq_lock *lock) +{ + spin_unlock(&lock->lock); +} +EXPORT_SYMBOL_GPL(__libeth_xdpsq_unlock); + /* ``XDP_TX`` bulking */
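For illustration only (not part of the patch): a sketch of how a driver could wire the sharing helpers up at queue-configuration time. The priv layout is hypothetical; the libeth_xdpsq_* calls come from this patch.

static void mydrv_init_xdpsqs(struct mydrv_priv *priv)
{
	/* one XDPSQ per CPU if the device has enough spare Tx queues */
	u32 num = libeth_xdpsq_num(priv->num_rxq, priv->num_txq,
				   priv->max_txq);
	bool share = libeth_xdpsq_shared(num);

	for (u32 i = 0; i < num; i++)
		libeth_xdpsq_get(&priv->xdpsqs[i].lock, priv->netdev,
				 share);
}

The hot path then brackets descriptor filling with libeth_xdpsq_lock()/libeth_xdpsq_unlock(), which reduce to a single static-branch test when no queue is shared.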
From patchwork Tue Apr 15 17:28:16 2025 X-Patchwork-Submitter: Alexander Lobakin X-Patchwork-Id: 14052493 X-Patchwork-Delegate: kuba@kernel.org From: Alexander Lobakin To: intel-wired-lan@lists.osuosl.org Cc: Alexander Lobakin , Michal Kubiak , Maciej Fijalkowski , Tony Nguyen , Przemek Kitszel , Andrew Lunn , "David S.
Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni , Alexei Starovoitov , Daniel Borkmann , Jesper Dangaard Brouer , John Fastabend , Simon Horman , bpf@vger.kernel.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org Subject: [PATCH iwl-next 07/16] libeth: xdp: add XDPSQ cleanup timers Date: Tue, 15 Apr 2025 19:28:16 +0200 Message-ID: <20250415172825.3731091-8-aleksander.lobakin@intel.com> X-Mailer: git-send-email 2.49.0 In-Reply-To: <20250415172825.3731091-1-aleksander.lobakin@intel.com> References: <20250415172825.3731091-1-aleksander.lobakin@intel.com> Precedence: bulk X-Mailing-List: bpf@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Patchwork-Delegate: kuba@kernel.org When XDP Tx queues are not interrupt-driven but use lazy cleaning, i.e. only when there are less than `threshold` free descriptors left, we also need cleanup timers to avoid &xdp_buff and &xdp_frame stall for too long, especially with Page Pool (it warns every about inflight pages every 60 second). Let's say we sent 256 frames and don't need to send more, but we clean only when the number of pending items >= 384. In that case, those 256 will stall until 128 more are sent. For this, add simple helpers to run a timer which will clean the queue regardless, after 1 second of the last send. The timer is triggered when finalizing the queue. As long as there is regular active traffic, the timer doesn't fire. Signed-off-by: Alexander Lobakin --- include/net/libeth/types.h | 21 ++++++++- include/net/libeth/xdp.h | 57 +++++++++++++++++++++++++ drivers/net/ethernet/intel/libeth/xdp.c | 23 ++++++++++ 3 files changed, 100 insertions(+), 1 deletion(-) diff --git a/include/net/libeth/types.h b/include/net/libeth/types.h index abfccae1a346..4df703a9eb59 100644 --- a/include/net/libeth/types.h +++ b/include/net/libeth/types.h @@ -4,7 +4,7 @@ #ifndef __LIBETH_TYPES_H #define __LIBETH_TYPES_H -#include +#include /** * struct libeth_sq_napi_stats - "hot" counters to update in Tx completion loop @@ -60,4 +60,23 @@ struct libeth_xdpsq_lock { bool share; }; +/* XDPSQ clean-up timers */ + +/** + * struct libeth_xdpsq_timer - timer for cleaning up XDPSQs w/o interrupts + * @xdpsq: queue this timer belongs to + * @lock: lock for the queue + * @dwork: work performing cleanups + * + * XDPSQs not using interrupts but lazy cleaning, i.e. only when there's no + * space for sending the current queued frame/bulk, must fire up timers to + * make sure there are no stale buffers to free. + */ +struct libeth_xdpsq_timer { + void *xdpsq; + struct libeth_xdpsq_lock *lock; + + struct delayed_work dwork; +}; + #endif /* __LIBETH_TYPES_H */ diff --git a/include/net/libeth/xdp.h b/include/net/libeth/xdp.h index 47ec5c59a586..54cf1e7cc1fc 100644 --- a/include/net/libeth/xdp.h +++ b/include/net/libeth/xdp.h @@ -177,6 +177,63 @@ static inline void libeth_xdpsq_unlock(struct libeth_xdpsq_lock *lock) __libeth_xdpsq_unlock(lock); } +/* XDPSQ clean-up timers */ + +void libeth_xdpsq_init_timer(struct libeth_xdpsq_timer *timer, void *xdpsq, + struct libeth_xdpsq_lock *lock, + void (*poll)(struct work_struct *work)); + +/** + * libeth_xdpsq_deinit_timer - deinitialize &libeth_xdpsq_timer + * @timer: timer to deinitialize + * + * Flush and disable the underlying workqueue. 
+ */ +static inline void libeth_xdpsq_deinit_timer(struct libeth_xdpsq_timer *timer) +{ + cancel_delayed_work_sync(&timer->dwork); +} + +/** + * libeth_xdpsq_queue_timer - run &libeth_xdpsq_timer + * @timer: timer to queue + * + * Should be called after the queue was filled and the transmission was run + * to complete the pending buffers if no further sending will be done in a + * second (-> lazy cleaning won't happen). + * If the timer has already been queued, it will be pushed back to a + * one-second timeout again. + */ +static inline void libeth_xdpsq_queue_timer(struct libeth_xdpsq_timer *timer) +{ + mod_delayed_work_on(raw_smp_processor_id(), system_bh_highpri_wq, + &timer->dwork, HZ); +} + +/** + * libeth_xdpsq_run_timer - wrapper to run a queue clean-up on a timer event + * @work: workqueue belonging to the corresponding timer + * @poll: driver-specific completion queue poll function + * + * Run the polling function on the locked queue and requeue the timer if + * there's more work to do. + * Designed to be used via LIBETH_XDP_DEFINE_TIMER() below. + */ +static __always_inline void +libeth_xdpsq_run_timer(struct work_struct *work, + u32 (*poll)(void *xdpsq, u32 budget)) +{ + struct libeth_xdpsq_timer *timer = container_of(work, typeof(*timer), + dwork.work); + + libeth_xdpsq_lock(timer->lock); + + if (poll(timer->xdpsq, U32_MAX)) + libeth_xdpsq_queue_timer(timer); + + libeth_xdpsq_unlock(timer->lock); +} + /* Common Tx bits */ /** diff --git a/drivers/net/ethernet/intel/libeth/xdp.c b/drivers/net/ethernet/intel/libeth/xdp.c index 0c0aa3b1d49d..25be680de627 100644 --- a/drivers/net/ethernet/intel/libeth/xdp.c +++ b/drivers/net/ethernet/intel/libeth/xdp.c @@ -54,6 +54,29 @@ __libeth_xdpsq_unlock(struct libeth_xdpsq_lock *lock) } EXPORT_SYMBOL_GPL(__libeth_xdpsq_unlock); +/* XDPSQ clean-up timers */ + +/** + * libeth_xdpsq_init_timer - initialize an XDPSQ clean-up timer + * @timer: timer to initialize + * @xdpsq: queue this timer belongs to + * @lock: corresponding XDPSQ lock + * @poll: queue polling/completion function + * + * XDPSQ clean-up timers must be set up at queue configuration time, before + * use. Set the required pointers and the cleaning callback.
+ */ +void libeth_xdpsq_init_timer(struct libeth_xdpsq_timer *timer, void *xdpsq, + struct libeth_xdpsq_lock *lock, + void (*poll)(struct work_struct *work)) +{ + timer->xdpsq = xdpsq; + timer->lock = lock; + + INIT_DELAYED_WORK(&timer->dwork, poll); +} +EXPORT_SYMBOL_GPL(libeth_xdpsq_init_timer); + /* ``XDP_TX`` bulking */
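For illustration only (not part of the patch): the driver glue for these timers could look like the sketch below. The mydrv_* names are hypothetical; the work item runs the regular completion poll under the XDPSQ lock and re-arms itself while work remains, and the send path re-queues the timer when finalizing the queue.

/* driver's regular cleaning function, assumed to exist */
static u32 mydrv_poll_xdpsq(void *xdpsq, u32 budget);

static void mydrv_xdpsq_timer(struct work_struct *work)
{
	libeth_xdpsq_run_timer(work, mydrv_poll_xdpsq);
}

static void mydrv_cfg_xdpsq(struct mydrv_xdpsq *sq)
{
	libeth_xdpsq_init_timer(&sq->timer, sq, &sq->lock,
				mydrv_xdpsq_timer);
}

/* the @finalize callback passed to the Tx helpers from patch 04 */
static void mydrv_finalize_xdpsq(void *xdpsq, bool sent, bool flush)
{
	struct mydrv_xdpsq *sq = xdpsq;

	if (sent)
		mydrv_ring_doorbell(sq);	/* hypothetical */

	/* make sure the sent buffers get completed within a second */
	libeth_xdpsq_queue_timer(&sq->timer);
}

On queue destroy, libeth_xdpsq_deinit_timer() cancels and flushes the pending work.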
From patchwork Tue Apr 15 17:28:17 2025 X-Patchwork-Submitter: Alexander Lobakin X-Patchwork-Id: 14052494 X-Patchwork-Delegate: kuba@kernel.org From: Alexander Lobakin To: intel-wired-lan@lists.osuosl.org Cc: Alexander Lobakin , Michal Kubiak , Maciej Fijalkowski , Tony Nguyen , Przemek Kitszel , Andrew Lunn , "David S. Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni , Alexei Starovoitov , Daniel Borkmann , Jesper Dangaard Brouer , John Fastabend , Simon Horman , bpf@vger.kernel.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org Subject: [PATCH iwl-next 08/16] libeth: xdp: add helpers for preparing/processing &libeth_xdp_buff Date: Tue, 15 Apr 2025 19:28:17 +0200 Message-ID: <20250415172825.3731091-9-aleksander.lobakin@intel.com> X-Mailer: git-send-email 2.49.0 In-Reply-To: <20250415172825.3731091-1-aleksander.lobakin@intel.com> References: <20250415172825.3731091-1-aleksander.lobakin@intel.com> Add convenience helpers to build an &xdp_buff. This means: general initialization before the NAPI loop, adding the head, adding frags, etc. libeth_xdp_process_buff() is the same as what everybody has in their drivers: dma_sync_for_cpu(); if (!frag) { add_head(); prefetch(); } else { add_frag(); } Note that I don't use net_prefetch(), sticking to the original prefetch(). In none of my tests did prefetching 128 bytes yield better perf than 64 bytes. That might differ if the headers are huge enough, but then additional tunneling etc. overhead takes place, so you won't win a lot either way. &libeth_xdp_buff_stash is for cases when you exit the polling loop without finishing building the buff. If that happens, you need to store the buffer in the queue structure until the next loop and then restore it. It makes no sense to place a whole &xdp_buff there, so define a minimal structure which stores only the fields essential to restore it. I was able to pack it into 16 bytes, which is only 8 bytes bigger than `struct sk_buff *skb` on x64. Signed-off-by: Alexander Lobakin --- include/net/libeth/types.h | 23 ++++ include/net/libeth/xdp.h | 151 ++++++++++++++++++++++++ drivers/net/ethernet/intel/libeth/xdp.c | 90 ++++++++++++++ 3 files changed, 264 insertions(+) diff --git a/include/net/libeth/types.h b/include/net/libeth/types.h index 4df703a9eb59..7b27c1966d45 100644 --- a/include/net/libeth/types.h +++ b/include/net/libeth/types.h @@ -79,4 +79,27 @@ struct libeth_xdpsq_timer { struct delayed_work dwork; }; +/* Rx polling path */ + +/** + * struct libeth_xdp_buff_stash - struct for stashing &xdp_buff onto a queue + * @data: pointer to the start of the frame, xdp_buff.data + * @headroom: frame headroom, xdp_buff.data - xdp_buff.data_hard_start + * @len: frame linear space length, xdp_buff.data_end - xdp_buff.data + * @frame_sz: truesize occupied by the frame, xdp_buff.frame_sz + * @flags: xdp_buff.flags + * + * &xdp_buff is 56 bytes long on x64, &libeth_xdp_buff is 64 bytes. This + * structure carries only necessary fields to save/restore a partially built + * frame on the queue structure to finish it during the next NAPI poll.
+ */ +struct libeth_xdp_buff_stash { + void *data; + u16 headroom; + u16 len; + + u32 frame_sz:24; + u32 flags:8; +} __aligned_largest; + #endif /* __LIBETH_TYPES_H */ diff --git a/include/net/libeth/xdp.h b/include/net/libeth/xdp.h index 54cf1e7cc1fc..8149eba93af7 100644 --- a/include/net/libeth/xdp.h +++ b/include/net/libeth/xdp.h @@ -60,6 +60,42 @@ static_assert(offsetof(struct libeth_xdp_buff, desc) == static_assert(IS_ALIGNED(sizeof(struct xdp_buff_xsk), __alignof(struct libeth_xdp_buff))); +/** + * __LIBETH_XDP_ONSTACK_BUFF - declare a &libeth_xdp_buff on the stack + * @name: name of the variable to declare + * @...: sizeof() of the driver-private data + */ +#define __LIBETH_XDP_ONSTACK_BUFF(name, ...) \ + ___LIBETH_XDP_ONSTACK_BUFF(name, ##__VA_ARGS__) +/** + * LIBETH_XDP_ONSTACK_BUFF - declare a &libeth_xdp_buff on the stack + * @name: name of the variable to declare + * @...: type or variable name of the driver-private data + */ +#define LIBETH_XDP_ONSTACK_BUFF(name, ...) \ + __LIBETH_XDP_ONSTACK_BUFF(name, __libeth_xdp_priv_sz(__VA_ARGS__)) + +#define ___LIBETH_XDP_ONSTACK_BUFF(name, ...) \ + _DEFINE_FLEX(struct libeth_xdp_buff, name, priv, \ + LIBETH_XDP_PRIV_SZ(__VA_ARGS__ + 0), \ + /* no init */); \ + LIBETH_XDP_ASSERT_PRIV_SZ(__VA_ARGS__ + 0) + +#define __libeth_xdp_priv_sz(...) \ + CONCATENATE(__libeth_xdp_psz, COUNT_ARGS(__VA_ARGS__))(__VA_ARGS__) + +#define __libeth_xdp_psz0(...) +#define __libeth_xdp_psz1(...) sizeof(__VA_ARGS__) + +#define LIBETH_XDP_PRIV_SZ(sz) \ + (ALIGN(sz, __alignof(struct libeth_xdp_buff)) / sizeof(long)) + +/* Performs XSK_CHECK_PRIV_TYPE() */ +#define LIBETH_XDP_ASSERT_PRIV_SZ(sz) \ + static_assert(offsetofend(struct xdp_buff_xsk, cb) >= \ + struct_size_t(struct libeth_xdp_buff, priv, \ + LIBETH_XDP_PRIV_SZ(sz))) + /* XDPSQ sharing */ DECLARE_STATIC_KEY_FALSE(libeth_xdpsq_share); @@ -956,6 +992,65 @@ __libeth_xdp_xmit_do_bulk(struct libeth_xdp_tx_bulk *bq, /* Rx polling path */ +void libeth_xdp_load_stash(struct libeth_xdp_buff *dst, + const struct libeth_xdp_buff_stash *src); +void libeth_xdp_save_stash(struct libeth_xdp_buff_stash *dst, + const struct libeth_xdp_buff *src); +void __libeth_xdp_return_stash(struct libeth_xdp_buff_stash *stash); + +/** + * libeth_xdp_init_buff - initialize a &libeth_xdp_buff for Rx NAPI poll + * @dst: onstack buffer to initialize + * @src: XDP buffer stash placed on the queue + * @rxq: registered &xdp_rxq_info corresponding to this queue + * + * Should be called before the main NAPI polling loop. Loads the content of + * the previously saved stash or initializes the buffer from scratch. + */ +static inline void +libeth_xdp_init_buff(struct libeth_xdp_buff *dst, + const struct libeth_xdp_buff_stash *src, + struct xdp_rxq_info *rxq) +{ + if (likely(!src->data)) + dst->data = NULL; + else + libeth_xdp_load_stash(dst, src); + + dst->base.rxq = rxq; +} + +/** + * libeth_xdp_save_buff - save a partially built buffer on a queue + * @dst: XDP buffer stash placed on the queue + * @src: onstack buffer to save + * + * Should be called after the main NAPI polling loop. If the loop exited before + * the buffer was finished, saves its content on the queue, so that it can be + * completed during the next poll. Otherwise, clears the stash. 
+ */ +static inline void libeth_xdp_save_buff(struct libeth_xdp_buff_stash *dst, + const struct libeth_xdp_buff *src) +{ + if (likely(!src->data)) + dst->data = NULL; + else + libeth_xdp_save_stash(dst, src); +} + +/** + * libeth_xdp_return_stash - free an XDP buffer stash from a queue + * @stash: stash to free + * + * If the queue is about to be destroyed, but it still has an incomplete + * buffer stash, this helper should be called to free it. + */ +static inline void libeth_xdp_return_stash(struct libeth_xdp_buff_stash *stash) +{ + if (stash->data) + __libeth_xdp_return_stash(stash); +} + static inline void libeth_xdp_return_va(const void *data, bool napi) { netmem_ref netmem = virt_to_netmem(data); @@ -997,6 +1092,62 @@ static inline void __libeth_xdp_return_buff(struct libeth_xdp_buff *xdp, xdp->data = NULL; } +bool libeth_xdp_buff_add_frag(struct libeth_xdp_buff *xdp, + const struct libeth_fqe *fqe, + u32 len); + +/** + * libeth_xdp_prepare_buff - fill &libeth_xdp_buff with head FQE data + * @xdp: XDP buffer to attach the head to + * @fqe: FQE containing the head buffer + * @len: buffer len passed from HW + * + * Internal, use libeth_xdp_process_buff() instead. Initializes XDP buffer + * head with the Rx buffer data: data pointer, length, headroom, and + * truesize/tailroom. Zeroes the flags. + */ +static inline void libeth_xdp_prepare_buff(struct libeth_xdp_buff *xdp, + const struct libeth_fqe *fqe, + u32 len) +{ + const struct page *page = __netmem_to_page(fqe->netmem); + + xdp_init_buff(&xdp->base, fqe->truesize, xdp->base.rxq); + xdp_prepare_buff(&xdp->base, page_address(page) + fqe->offset, + page->pp->p.offset, len, true); +} + +/** + * libeth_xdp_process_buff - attach Rx buffer to &libeth_xdp_buff + * @xdp: XDP buffer to attach the Rx buffer to + * @fqe: Rx buffer to process + * @len: received data length from the descriptor + * + * If the XDP buffer is empty, attaches the Rx buffer as head and initializes + * the required fields. Otherwise, attaches the buffer as a frag. + * Already performs DMA sync-for-CPU and frame start prefetch + * (for head buffers only). + * + * Return: true on success, false if the descriptor must be skipped (empty or + * no space for a new frag). + */ +static inline bool libeth_xdp_process_buff(struct libeth_xdp_buff *xdp, + const struct libeth_fqe *fqe, + u32 len) +{ + if (!libeth_rx_sync_for_cpu(fqe, len)) + return false; + + if (xdp->data) + return libeth_xdp_buff_add_frag(xdp, fqe, len); + + libeth_xdp_prepare_buff(xdp, fqe, len); + + prefetch(xdp->data); + + return true; +} + /* Tx buffer completion */ void libeth_xdp_return_buff_bulk(const struct skb_shared_info *sinfo, diff --git a/drivers/net/ethernet/intel/libeth/xdp.c b/drivers/net/ethernet/intel/libeth/xdp.c index 25be680de627..ca87789178ed 100644 --- a/drivers/net/ethernet/intel/libeth/xdp.c +++ b/drivers/net/ethernet/intel/libeth/xdp.c @@ -172,6 +172,64 @@ EXPORT_SYMBOL_GPL(libeth_xdp_xmit_return_bulk); /* Rx polling path */ +/** + * libeth_xdp_load_stash - recreate an &xdp_buff from libeth_xdp buffer stash + * @dst: target &libeth_xdp_buff to initialize + * @src: source stash + * + * External helper used by libeth_xdp_init_buff(), do not call directly. + * Recreate an onstack &libeth_xdp_buff using the stash saved earlier. + * The only field untouched (rxq) is initialized later in the + * abovementioned function.
+ */ +void libeth_xdp_load_stash(struct libeth_xdp_buff *dst, + const struct libeth_xdp_buff_stash *src) +{ + dst->data = src->data; + dst->base.data_end = src->data + src->len; + dst->base.data_meta = src->data; + dst->base.data_hard_start = src->data - src->headroom; + + dst->base.frame_sz = src->frame_sz; + dst->base.flags = src->flags; +} +EXPORT_SYMBOL_GPL(libeth_xdp_load_stash); + +/** + * libeth_xdp_save_stash - convert &xdp_buff to a libeth_xdp buffer stash + * @dst: target &libeth_xdp_buff_stash to initialize + * @src: source XDP buffer + * + * External helper used by libeth_xdp_save_buff(), do not call directly. + * Use the fields from the passed XDP buffer to initialize the stash on the + * queue, so that a partially received frame can be finished later during + * the next NAPI poll. + */ +void libeth_xdp_save_stash(struct libeth_xdp_buff_stash *dst, + const struct libeth_xdp_buff *src) +{ + dst->data = src->data; + dst->headroom = src->data - src->base.data_hard_start; + dst->len = src->base.data_end - src->data; + + dst->frame_sz = src->base.frame_sz; + dst->flags = src->base.flags; + + WARN_ON_ONCE(dst->flags != src->base.flags); +} +EXPORT_SYMBOL_GPL(libeth_xdp_save_stash); + +void __libeth_xdp_return_stash(struct libeth_xdp_buff_stash *stash) +{ + LIBETH_XDP_ONSTACK_BUFF(xdp); + + libeth_xdp_load_stash(xdp, stash); + libeth_xdp_return_buff_slow(xdp); + + stash->data = NULL; +} +EXPORT_SYMBOL_GPL(__libeth_xdp_return_stash); + /** * libeth_xdp_return_buff_slow - free &libeth_xdp_buff * @xdp: buffer to free/return @@ -185,6 +243,38 @@ void __cold libeth_xdp_return_buff_slow(struct libeth_xdp_buff *xdp) } EXPORT_SYMBOL_GPL(libeth_xdp_return_buff_slow); +/** + * libeth_xdp_buff_add_frag - add frag to XDP buffer + * @xdp: head XDP buffer + * @fqe: Rx buffer containing the frag + * @len: frag length reported by HW + * + * External helper used by libeth_xdp_process_buff(), do not call directly. + * Frees both head and frag buffers on error. + * + * Return: true on success, false on error (no space for a new frag).
+ */
+bool libeth_xdp_buff_add_frag(struct libeth_xdp_buff *xdp,
+			      const struct libeth_fqe *fqe,
+			      u32 len)
+{
+	netmem_ref netmem = fqe->netmem;
+
+	if (!xdp_buff_add_frag(&xdp->base, netmem,
+			       fqe->offset + netmem_get_pp(netmem)->p.offset,
+			       len, fqe->truesize))
+		goto recycle;
+
+	return true;
+
+recycle:
+	libeth_rx_recycle_slow(netmem);
+	libeth_xdp_return_buff_slow(xdp);
+
+	return false;
+}
+EXPORT_SYMBOL_GPL(libeth_xdp_buff_add_frag);
+
 /* Tx buffer completion */
 
 static void libeth_xdp_put_netmem_bulk(netmem_ref netmem,

From patchwork Tue Apr 15 17:28:18 2025
X-Patchwork-Submitter: Alexander Lobakin
X-Patchwork-Id: 14052495
X-Patchwork-Delegate: kuba@kernel.org
From: Alexander Lobakin
To: intel-wired-lan@lists.osuosl.org
Cc: Alexander Lobakin, Michal Kubiak, Maciej Fijalkowski, Tony Nguyen,
    Przemek Kitszel, Andrew Lunn, "David S. Miller", Eric Dumazet,
    Jakub Kicinski, Paolo Abeni, Alexei Starovoitov, Daniel Borkmann,
    Jesper Dangaard Brouer, John Fastabend, Simon Horman,
    bpf@vger.kernel.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH iwl-next 09/16] libeth: xdp: add XDP prog run and verdict result handling
Date: Tue, 15 Apr 2025 19:28:18 +0200
Message-ID: <20250415172825.3731091-10-aleksander.lobakin@intel.com>
In-Reply-To: <20250415172825.3731091-1-aleksander.lobakin@intel.com>
References: <20250415172825.3731091-1-aleksander.lobakin@intel.com>

Running a prog and handling the verdicts, up to napi_gro_receive(), is
also pretty generic code not really differing between vendors (except
for Tx descriptor filling and Rx descriptor parsing). Define a couple
of inlines to do that. The inline callbacks a driver needs to pass are
mentioned above: Tx descriptor filling for XDP_TX, populating an skb
with the descriptor data for XDP_PASS, finalizing XDPSQs after the
polling loop for XDP_TX (kicking the HW to start sending).

The populate callback is passed only a &libeth_xdp_buff, assuming the
buff::desc pointer is enough; plus, you can always get the corresponding
Rx queue structure via container_of(buff::rxq). If not, a driver can
extend the buff with more fields directly on the stack without touching
libeth_xdp definitions.
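As an example of the latter, a driver could wrap the buff directly on
the stack (a sketch only; the names here are hypothetical):

	struct my_xdp_buff {
		struct libeth_xdp_buff buff;
		u32 extra_state;
	};

while the queue itself stays reachable without any extra fields, e.g.:

	rxq = container_of(xdp->base.rxq, struct my_rxq, xdp_rxq);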
Signed-off-by: Alexander Lobakin --- include/net/libeth/types.h | 22 ++ include/net/libeth/xdp.h | 281 ++++++++++++++++++++++++ drivers/net/ethernet/intel/libeth/xdp.c | 27 +++ 3 files changed, 330 insertions(+) diff --git a/include/net/libeth/types.h b/include/net/libeth/types.h index 7b27c1966d45..cf1d78a9dc38 100644 --- a/include/net/libeth/types.h +++ b/include/net/libeth/types.h @@ -6,6 +6,28 @@ #include +/* Stats */ + +/** + * struct libeth_rq_napi_stats - "hot" counters to update in Rx polling loop + * @packets: received frames counter + * @bytes: sum of bytes of received frames above + * @fragments: sum of fragments of received S/G frames + * @hsplit: number of frames the device performed the header split for + * @raw: alias to access all the fields as an array + */ +struct libeth_rq_napi_stats { + union { + struct { + u32 packets; + u32 bytes; + u32 fragments; + u32 hsplit; + }; + DECLARE_FLEX_ARRAY(u32, raw); + }; +}; + /** * struct libeth_sq_napi_stats - "hot" counters to update in Tx completion loop * @packets: completed frames counter diff --git a/include/net/libeth/xdp.h b/include/net/libeth/xdp.h index 8149eba93af7..104210da921e 100644 --- a/include/net/libeth/xdp.h +++ b/include/net/libeth/xdp.h @@ -20,6 +20,7 @@ enum { LIBETH_XDP_DROP = BIT(0), LIBETH_XDP_ABORTED = BIT(1), LIBETH_XDP_TX = BIT(2), + LIBETH_XDP_REDIRECT = BIT(3), }; /* @@ -353,6 +354,7 @@ static_assert(offsetof(struct libeth_xdp_tx_frame, frag.len) == * @prog: corresponding active XDP program, %NULL for .ndo_xdp_xmit() * @dev: &net_device which the frames are transmitted on * @xdpsq: shortcut to the corresponding driver-specific XDPSQ structure + * @act_mask: Rx only, mask of all the XDP prog verdicts for that NAPI session * @count: current number of frames in @bulk * @bulk: array of queued frames for bulk Tx * @@ -366,6 +368,7 @@ struct libeth_xdp_tx_bulk { struct net_device *dev; void *xdpsq; + u32 act_mask; u32 count; struct libeth_xdp_tx_frame bulk[LIBETH_XDP_TX_BULK]; } __aligned(sizeof(struct libeth_xdp_tx_frame)); @@ -992,6 +995,40 @@ __libeth_xdp_xmit_do_bulk(struct libeth_xdp_tx_bulk *bq, /* Rx polling path */ +/** + * libeth_xdp_tx_init_bulk - initialize an XDP Tx bulk for Rx NAPI poll + * @bq: bulk to initialize + * @prog: RCU pointer to the XDP program (can be %NULL) + * @dev: target &net_device + * @xdpsqs: array of driver XDPSQ structs + * @num: number of active XDPSQs, the above array length + * + * Should be called on an onstack XDP Tx bulk before the NAPI polling loop. + * Initializes all the needed fields to run libeth_xdp functions. If @num == 0, + * assumes XDP is not enabled. 
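+ *
+ * A rough usage sketch (all driver-side names are hypothetical):
+ *
+ *	struct libeth_xdp_tx_bulk bq;
+ *
+ *	libeth_xdp_tx_init_bulk(&bq, rq->xdp_prog, rq->dev, rq->xdpsqs,
+ *				rq->num_xdpsqs);
+ *	... the NAPI polling loop, queueing ``XDP_TX`` frames to &bq ...
+ *	libeth_xdp_finalize_rx(&bq, ...);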
+ */ +#define libeth_xdp_tx_init_bulk(bq, prog, dev, xdpsqs, num) \ + __libeth_xdp_tx_init_bulk(bq, prog, dev, xdpsqs, num, false, \ + __UNIQUE_ID(bq_), __UNIQUE_ID(nqs_)) + +#define __libeth_xdp_tx_init_bulk(bq, pr, d, xdpsqs, num, ub, un) do { \ + typeof(bq) ub = (bq); \ + u32 un = (num); \ + \ + rcu_read_lock(); \ + \ + if (un) { \ + ub->prog = rcu_dereference(pr); \ + ub->dev = (d); \ + ub->xdpsq = (xdpsqs)[libeth_xdpsq_id(un)]; \ + } else { \ + ub->prog = NULL; \ + } \ + \ + ub->act_mask = 0; \ + ub->count = 0; \ +} while (0) + void libeth_xdp_load_stash(struct libeth_xdp_buff *dst, const struct libeth_xdp_buff_stash *src); void libeth_xdp_save_stash(struct libeth_xdp_buff_stash *dst, @@ -1148,6 +1185,250 @@ static inline bool libeth_xdp_process_buff(struct libeth_xdp_buff *xdp, return true; } +/** + * libeth_xdp_buff_stats_frags - update onstack RQ stats with XDP frags info + * @ss: onstack stats to update + * @xdp: buffer to account + * + * Internal helper used by __libeth_xdp_run_pass(), do not call directly. + * Adds buffer's frags count and total len to the onstack stats. + */ +static inline void +libeth_xdp_buff_stats_frags(struct libeth_rq_napi_stats *ss, + const struct libeth_xdp_buff *xdp) +{ + const struct skb_shared_info *sinfo; + + sinfo = xdp_get_shared_info_from_buff(&xdp->base); + ss->bytes += sinfo->xdp_frags_size; + ss->fragments += sinfo->nr_frags + 1; +} + +u32 libeth_xdp_prog_exception(const struct libeth_xdp_tx_bulk *bq, + struct libeth_xdp_buff *xdp, + enum xdp_action act, int ret); + +/** + * __libeth_xdp_run_prog - run XDP program on an XDP buffer + * @xdp: XDP buffer to run the prog on + * @bq: buffer bulk for ``XDP_TX`` queueing + * + * Internal inline abstraction to run XDP program. Handles ``XDP_DROP`` + * and ``XDP_REDIRECT`` only, the rest is processed levels up. + * Reports an XDP prog exception on errors. + * + * Return: libeth_xdp prog verdict depending on the prog's verdict. + */ +static __always_inline u32 +__libeth_xdp_run_prog(struct libeth_xdp_buff *xdp, + const struct libeth_xdp_tx_bulk *bq) +{ + enum xdp_action act; + + act = bpf_prog_run_xdp(bq->prog, &xdp->base); + if (unlikely(act < XDP_DROP || act > XDP_REDIRECT)) + goto out; + + switch (act) { + case XDP_PASS: + return LIBETH_XDP_PASS; + case XDP_DROP: + libeth_xdp_return_buff(xdp); + + return LIBETH_XDP_DROP; + case XDP_TX: + return LIBETH_XDP_TX; + case XDP_REDIRECT: + if (unlikely(xdp_do_redirect(bq->dev, &xdp->base, bq->prog))) + break; + + xdp->data = NULL; + + return LIBETH_XDP_REDIRECT; + default: + break; + } + +out: + return libeth_xdp_prog_exception(bq, xdp, act, 0); +} + +/** + * __libeth_xdp_run_flush - run XDP program and handle ``XDP_TX`` verdict + * @xdp: XDP buffer to run the prog on + * @bq: buffer bulk for ``XDP_TX`` queueing + * @run: internal callback for running XDP program + * @queue: internal callback for queuing ``XDP_TX`` frame + * @flush_bulk: driver callback for flushing a bulk + * + * Internal inline abstraction to run XDP program and additionally handle + * ``XDP_TX`` verdict. + * Do not use directly. + * + * Return: libeth_xdp prog verdict depending on the prog's verdict. 
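+ *
+ * The verdict is also accumulated into bq->act_mask, which is what
+ * libeth_xdp_finalize_rx() checks later to decide whether an ``XDP_TX``
+ * flush and/or xdp_do_flush() is needed at the end of the poll.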
+ */
+static __always_inline u32
+__libeth_xdp_run_flush(struct libeth_xdp_buff *xdp,
+		       struct libeth_xdp_tx_bulk *bq,
+		       u32 (*run)(struct libeth_xdp_buff *xdp,
+				  const struct libeth_xdp_tx_bulk *bq),
+		       bool (*queue)(struct libeth_xdp_tx_bulk *bq,
+				     struct libeth_xdp_buff *xdp,
+				     bool (*flush_bulk)
+					  (struct libeth_xdp_tx_bulk *bq,
+					   u32 flags)),
+		       bool (*flush_bulk)(struct libeth_xdp_tx_bulk *bq,
+					  u32 flags))
+{
+	u32 act;
+
+	act = run(xdp, bq);
+	if (act == LIBETH_XDP_TX && unlikely(!queue(bq, xdp, flush_bulk)))
+		act = LIBETH_XDP_DROP;
+
+	bq->act_mask |= act;
+
+	return act;
+}
+
+/**
+ * libeth_xdp_run_prog - run XDP program and handle all verdicts
+ * @xdp: XDP buffer to process
+ * @bq: XDP Tx bulk to queue ``XDP_TX`` buffers
+ * @fl: driver ``XDP_TX`` bulk flush callback
+ *
+ * Run the attached XDP program and handle all possible verdicts.
+ *
+ * Return: true if the buffer should be passed up the stack, false if the poll
+ * should go to the next buffer.
+ */
+#define libeth_xdp_run_prog(xdp, bq, fl)				     \
+	(__libeth_xdp_run_flush(xdp, bq, __libeth_xdp_run_prog,		     \
+				libeth_xdp_tx_queue_bulk,		     \
+				fl) == LIBETH_XDP_PASS)
+
+/**
+ * __libeth_xdp_run_pass - helper to run XDP program and handle the result
+ * @xdp: XDP buffer to process
+ * @bq: XDP Tx bulk to queue ``XDP_TX`` frames
+ * @napi: NAPI to build an skb and pass it up the stack
+ * @rs: onstack libeth RQ stats
+ * @md: metadata that should be filled to the XDP buffer
+ * @prep: callback for filling the metadata
+ * @run: driver wrapper to run XDP program
+ * @populate: driver callback to populate an skb with the HW descriptor data
+ *
+ * Inline abstraction that does the following:
+ * 1) adds frame size and frag number (if needed) to the onstack stats;
+ * 2) fills the descriptor metadata to the onstack &libeth_xdp_buff;
+ * 3) runs XDP program if present;
+ * 4) handles all possible verdicts;
+ * 5) on ``XDP_PASS``, builds an skb from the buffer;
+ * 6) populates it with the descriptor metadata;
+ * 7) passes it up the stack.
+ *
+ * In most cases, number 2 means just writing the pointer to the HW descriptor
+ * to the XDP buffer. If so, please use LIBETH_XDP_DEFINE_RUN{,_PASS}()
+ * wrappers to build a driver function.
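+ *
+ * A minimal @populate sketch (the descriptor layout is made up):
+ *
+ *	static bool my_populate_skb(struct sk_buff *skb,
+ *				    const struct libeth_xdp_buff *xdp,
+ *				    struct libeth_rq_napi_stats *rs)
+ *	{
+ *		const struct my_rx_desc *desc = xdp->desc;
+ *
+ *		... fill hash/csum/VLAN fields of @skb from @desc ...
+ *
+ *		return true;
+ *	}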
+ */ +static __always_inline void +__libeth_xdp_run_pass(struct libeth_xdp_buff *xdp, + struct libeth_xdp_tx_bulk *bq, struct napi_struct *napi, + struct libeth_rq_napi_stats *rs, const void *md, + void (*prep)(struct libeth_xdp_buff *xdp, + const void *md), + bool (*run)(struct libeth_xdp_buff *xdp, + struct libeth_xdp_tx_bulk *bq), + bool (*populate)(struct sk_buff *skb, + const struct libeth_xdp_buff *xdp, + struct libeth_rq_napi_stats *rs)) +{ + struct sk_buff *skb; + + rs->bytes += xdp->base.data_end - xdp->data; + rs->packets++; + + if (xdp_buff_has_frags(&xdp->base)) + libeth_xdp_buff_stats_frags(rs, xdp); + + if (prep && (!__builtin_constant_p(!!md) || md)) + prep(xdp, md); + + if (!bq || !run || !bq->prog) + goto build; + + if (!run(xdp, bq)) + return; + +build: + skb = xdp_build_skb_from_buff(&xdp->base); + if (unlikely(!skb)) { + libeth_xdp_return_buff_slow(xdp); + return; + } + + xdp->data = NULL; + + if (unlikely(!populate(skb, xdp, rs))) { + napi_consume_skb(skb, true); + return; + } + + napi_gro_receive(napi, skb); +} + +static inline void libeth_xdp_prep_desc(struct libeth_xdp_buff *xdp, + const void *desc) +{ + xdp->desc = desc; +} + +/** + * libeth_xdp_run_pass - helper to run XDP program and handle the result + * @xdp: XDP buffer to process + * @bq: XDP Tx bulk to queue ``XDP_TX`` frames + * @napi: NAPI to build an skb and pass it up the stack + * @ss: onstack libeth RQ stats + * @desc: pointer to the HW descriptor for that frame + * @run: driver wrapper to run XDP program + * @populate: driver callback to populate an skb with the HW descriptor data + * + * Wrapper around the underscored version when "fill the descriptor metadata" + * means just writing the pointer to the HW descriptor as @xdp->desc. + */ +#define libeth_xdp_run_pass(xdp, bq, napi, ss, desc, run, populate) \ + __libeth_xdp_run_pass(xdp, bq, napi, ss, desc, libeth_xdp_prep_desc, \ + run, populate) + +/** + * libeth_xdp_finalize_rx - finalize XDPSQ after a NAPI polling loop + * @bq: ``XDP_TX`` frame bulk + * @flush: driver callback to flush the bulk + * @finalize: driver callback to start sending the frames and run the timer + * + * Flush the bulk if there are frames left to send, kick the queue and flush + * the XDP maps. + */ +#define libeth_xdp_finalize_rx(bq, flush, finalize) \ + __libeth_xdp_finalize_rx(bq, 0, flush, finalize) + +static __always_inline void +__libeth_xdp_finalize_rx(struct libeth_xdp_tx_bulk *bq, u32 flags, + bool (*flush_bulk)(struct libeth_xdp_tx_bulk *bq, + u32 flags), + void (*finalize)(void *xdpsq, bool sent, bool flush)) +{ + if (bq->act_mask & LIBETH_XDP_TX) { + if (bq->count) + flush_bulk(bq, flags | LIBETH_XDP_TX_DROP); + finalize(bq->xdpsq, true, true); + } + if (bq->act_mask & LIBETH_XDP_REDIRECT) + xdp_do_flush(); + + rcu_read_unlock(); +} + /* Tx buffer completion */ void libeth_xdp_return_buff_bulk(const struct skb_shared_info *sinfo, diff --git a/drivers/net/ethernet/intel/libeth/xdp.c b/drivers/net/ethernet/intel/libeth/xdp.c index ca87789178ed..a20ba2478097 100644 --- a/drivers/net/ethernet/intel/libeth/xdp.c +++ b/drivers/net/ethernet/intel/libeth/xdp.c @@ -275,6 +275,33 @@ bool libeth_xdp_buff_add_frag(struct libeth_xdp_buff *xdp, } EXPORT_SYMBOL_GPL(libeth_xdp_buff_add_frag); +/** + * libeth_xdp_prog_exception - handle XDP prog exceptions + * @bq: XDP Tx bulk + * @xdp: buffer to process + * @act: original XDP prog verdict + * @ret: error code if redirect failed + * + * External helper used by __libeth_xdp_run_prog(), do not call directly. 
+ * Reports an invalid @act, emits the XDP exception trace event, and frees
+ * the buffer.
+ *
+ * Return: libeth_xdp XDP prog verdict.
+ */
+u32 __cold libeth_xdp_prog_exception(const struct libeth_xdp_tx_bulk *bq,
+				     struct libeth_xdp_buff *xdp,
+				     enum xdp_action act, int ret)
+{
+	if (act > XDP_REDIRECT)
+		bpf_warn_invalid_xdp_action(bq->dev, bq->prog, act);
+
+	libeth_trace_xdp_exception(bq->dev, bq->prog, act);
+
+	libeth_xdp_return_buff_slow(xdp);
+
+	return LIBETH_XDP_DROP;
+}
+EXPORT_SYMBOL_GPL(libeth_xdp_prog_exception);
+
 /* Tx buffer completion */
 
 static void libeth_xdp_put_netmem_bulk(netmem_ref netmem,

From patchwork Tue Apr 15 17:28:19 2025
X-Patchwork-Submitter: Alexander Lobakin
X-Patchwork-Id: 14052496
X-Patchwork-Delegate: kuba@kernel.org
From: Alexander Lobakin
To: intel-wired-lan@lists.osuosl.org
Cc: Alexander Lobakin, Michal Kubiak, Maciej Fijalkowski, Tony Nguyen,
    Przemek Kitszel, Andrew Lunn, "David S. Miller", Eric Dumazet,
    Jakub Kicinski, Paolo Abeni, Alexei Starovoitov, Daniel Borkmann,
    Jesper Dangaard Brouer, John Fastabend, Simon Horman,
    bpf@vger.kernel.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH iwl-next 10/16] libeth: xdp: add templates for building driver-side callbacks
Date: Tue, 15 Apr 2025 19:28:19 +0200
Message-ID: <20250415172825.3731091-11-aleksander.lobakin@intel.com>
In-Reply-To: <20250415172825.3731091-1-aleksander.lobakin@intel.com>
References: <20250415172825.3731091-1-aleksander.lobakin@intel.com>

Defining driver-specific functions to pass to libeth_xdp functions can
induce boilerplate and/or look a bit cryptic with all those layers of
indirection. On the other hand, this indirection is needed to allow
compilers to uninline big functions even when passed to __always_inline
helpers (too much inlining also hurts performance in some cases), plus
to reuse some XDP helpers in XSk code.

Add macros to quickly build them, with detailed kdoc. They take the
names of the actual callbacks for filling a Tx descriptor and other
purely HW-specific things and wrap them appropriately.
LIBETH_XDP_DEFINE_{START,END}() is unfortunately needed for GCC 8+ to
let the drivers control which functions will be static and which global
without hitting `-Wold-style-declaration`.

Signed-off-by: Alexander Lobakin
---
 include/net/libeth/xdp.h | 195 +++++++++++++++++++++++++++++++++++++++
 1 file changed, 195 insertions(+)

diff --git a/include/net/libeth/xdp.h b/include/net/libeth/xdp.h
index 104210da921e..643b0a8acab3 100644
--- a/include/net/libeth/xdp.h
+++ b/include/net/libeth/xdp.h
@@ -735,6 +735,9 @@ __libeth_xdp_tx_flush_bulk(struct libeth_xdp_tx_bulk *bq, u32 flags,
  * @flags: Tx flags, see above
  * @prep: driver callback to prepare the queue
  * @xmit: driver callback to fill a HW descriptor
+ *
+ * Use via LIBETH_XDP_DEFINE_FLUSH_TX() to define an ``XDP_TX`` driver
+ * callback.
 */
 #define libeth_xdp_tx_flush_bulk(bq, flags, prep, xmit)			     \
	__libeth_xdp_tx_flush_bulk(bq, flags, prep, libeth_xdp_tx_fill_buf, \
@@ -742,6 +745,25 @@

 /* .ndo_xdp_xmit() implementation */

+/**
+ * libeth_xdp_xmit_init_bulk - internal helper to initialize bulk for XDP xmit
+ * @bq: bulk to initialize
+ * @dev: target &net_device
+ * @xdpsqs: array of driver-specific XDPSQ structs
+ * @num: number of active XDPSQs (the above array length)
+ */
+#define libeth_xdp_xmit_init_bulk(bq, dev, xdpsqs, num)			     \
+	__libeth_xdp_xmit_init_bulk(bq, dev, (xdpsqs)[libeth_xdpsq_id(num)])
+
+static inline void __libeth_xdp_xmit_init_bulk(struct libeth_xdp_tx_bulk *bq,
+					       struct net_device *dev,
+					       void *xdpsq)
+{
+	bq->dev = dev;
+	bq->xdpsq = xdpsq;
+	bq->count = 0;
+}
+
 /**
  * libeth_xdp_xmit_frame_dma - internal helper to access DMA of an &xdp_frame
  * @xf: pointer to the XDP frame
@@ -934,6 +956,9 @@ libeth_xdp_xmit_fill_buf(struct libeth_xdp_tx_frame frm, u32 i,
  * @flags: Tx flags, see __libeth_xdp_tx_flush_bulk()
  * @prep: driver callback to prepare the queue
  * @xmit: driver callback to fill a HW descriptor
+ *
+ * Use via LIBETH_XDP_DEFINE_FLUSH_XMIT() to define an XDP xmit driver
+ * callback.
  */
 #define libeth_xdp_xmit_flush_bulk(bq, flags, prep, xmit)		     \
	__libeth_xdp_tx_flush_bulk(bq, (flags) | LIBETH_XDP_TX_NDO, prep,   \
@@ -993,6 +1018,44 @@ __libeth_xdp_xmit_do_bulk(struct libeth_xdp_tx_bulk *bq,
 	return nxmit;
 }

+/**
+ * libeth_xdp_xmit_do_bulk - implement full .ndo_xdp_xmit() in driver
+ * @dev: target &net_device
+ * @n: number of frames to send
+ * @fr: XDP frames to send
+ * @f: flags passed by the stack
+ * @xqs: array of XDPSQs driver structs
+ * @nqs: number of active XDPSQs, the above array length
+ * @fl: driver callback to flush an XDP xmit bulk
+ * @fin: driver callback to finalize the queue
+ *
+ * If the driver has active XDPSQs, perform common checks and send the frames.
+ * Finalize the queue, if requested.
+ *
+ * Return: number of frames sent or -errno on error.
+ */
+#define libeth_xdp_xmit_do_bulk(dev, n, fr, f, xqs, nqs, fl, fin)	     \
+	_libeth_xdp_xmit_do_bulk(dev, n, fr, f, xqs, nqs, fl, fin,	     \
+				 __UNIQUE_ID(bq_), __UNIQUE_ID(ret_),	     \
+				 __UNIQUE_ID(nqs_))
+
+#define _libeth_xdp_xmit_do_bulk(d, n, fr, f, xqs, nqs, fl, fin, ub, ur, un) \
+({									     \
+	u32 un = (nqs);							     \
+	int ur;								     \
+									     \
+	if (likely(un)) {						     \
+		struct libeth_xdp_tx_bulk ub;				     \
+									     \
+		libeth_xdp_xmit_init_bulk(&ub, d, xqs, un);		     \
+		ur = __libeth_xdp_xmit_do_bulk(&ub, fr, n, f, fl, fin);	     \
+	} else {							     \
+		ur = -ENXIO;						     \
+	}								     \
+									     \
+	ur;								     \
+})
+
 /* Rx polling path */

 /**
@@ -1298,6 +1361,7 @@ __libeth_xdp_run_flush(struct libeth_xdp_buff *xdp,
  * @fl: driver ``XDP_TX`` bulk flush callback
  *
  * Run the attached XDP program and handle all possible verdicts.
+ * Prefer using it via LIBETH_XDP_DEFINE_RUN{,_PASS,_PROG}().
  *
  * Return: true if the buffer should be passed up the stack, false if the poll
  * should go to the next buffer.
@@ -1429,6 +1493,137 @@ __libeth_xdp_finalize_rx(struct libeth_xdp_tx_bulk *bq, u32 flags,
 	rcu_read_unlock();
 }

+/*
+ * Helpers to reduce boilerplate code in drivers.
+ *
+ * Typical driver Rx flow would be (excl.
bulk and buff init, frag attach): + * + * LIBETH_XDP_DEFINE_START(); + * LIBETH_XDP_DEFINE_FLUSH_TX(static driver_xdp_flush_tx, driver_xdp_tx_prep, + * driver_xdp_xmit); + * LIBETH_XDP_DEFINE_RUN(static driver_xdp_run, driver_xdp_run_prog, + * driver_xdp_flush_tx, driver_populate_skb); + * LIBETH_XDP_DEFINE_FINALIZE(static driver_xdp_finalize_rx, + * driver_xdp_flush_tx, driver_xdp_finalize_sq); + * LIBETH_XDP_DEFINE_END(); + * + * This will build a set of 4 static functions. The compiler is free to decide + * whether to inline them. + * Then, in the NAPI polling function: + * + * while (packets < budget) { + * // ... + * driver_xdp_run(xdp, &bq, napi, &rs, desc); + * } + * driver_xdp_finalize_rx(&bq); + */ + +#define LIBETH_XDP_DEFINE_START() \ + __diag_push(); \ + __diag_ignore(GCC, 8, "-Wold-style-declaration", \ + "Allow specifying \'static\' after the return type") + +/** + * LIBETH_XDP_DEFINE_TIMER - define a driver XDPSQ cleanup timer callback + * @name: name of the function to define + * @poll: Tx polling/completion function + */ +#define LIBETH_XDP_DEFINE_TIMER(name, poll) \ +void name(struct work_struct *work) \ +{ \ + libeth_xdpsq_run_timer(work, poll); \ +} + +/** + * LIBETH_XDP_DEFINE_FLUSH_TX - define a driver ``XDP_TX`` bulk flush function + * @name: name of the function to define + * @prep: driver callback to clean an XDPSQ + * @xmit: driver callback to write a HW Tx descriptor + */ +#define LIBETH_XDP_DEFINE_FLUSH_TX(name, prep, xmit) \ + __LIBETH_XDP_DEFINE_FLUSH_TX(name, prep, xmit, xdp) + +#define __LIBETH_XDP_DEFINE_FLUSH_TX(name, prep, xmit, pfx) \ +bool name(struct libeth_xdp_tx_bulk *bq, u32 flags) \ +{ \ + return libeth_##pfx##_tx_flush_bulk(bq, flags, prep, xmit); \ +} + +/** + * LIBETH_XDP_DEFINE_FLUSH_XMIT - define a driver XDP xmit bulk flush function + * @name: name of the function to define + * @prep: driver callback to clean an XDPSQ + * @xmit: driver callback to write a HW Tx descriptor + */ +#define LIBETH_XDP_DEFINE_FLUSH_XMIT(name, prep, xmit) \ +bool name(struct libeth_xdp_tx_bulk *bq, u32 flags) \ +{ \ + return libeth_xdp_xmit_flush_bulk(bq, flags, prep, xmit); \ +} + +/** + * LIBETH_XDP_DEFINE_RUN_PROG - define a driver XDP program run function + * @name: name of the function to define + * @flush: driver callback to flush an ``XDP_TX`` bulk + */ +#define LIBETH_XDP_DEFINE_RUN_PROG(name, flush) \ + bool __LIBETH_XDP_DEFINE_RUN_PROG(name, flush, xdp) + +#define __LIBETH_XDP_DEFINE_RUN_PROG(name, flush, pfx) \ +name(struct libeth_xdp_buff *xdp, struct libeth_xdp_tx_bulk *bq) \ +{ \ + return libeth_##pfx##_run_prog(xdp, bq, flush); \ +} + +/** + * LIBETH_XDP_DEFINE_RUN_PASS - define a driver buffer process + pass function + * @name: name of the function to define + * @run: driver callback to run XDP program (above) + * @populate: driver callback to fill an skb with HW descriptor info + */ +#define LIBETH_XDP_DEFINE_RUN_PASS(name, run, populate) \ + void __LIBETH_XDP_DEFINE_RUN_PASS(name, run, populate, xdp) + +#define __LIBETH_XDP_DEFINE_RUN_PASS(name, run, populate, pfx) \ +name(struct libeth_xdp_buff *xdp, struct libeth_xdp_tx_bulk *bq, \ + struct napi_struct *napi, struct libeth_rq_napi_stats *ss, \ + const void *desc) \ +{ \ + return libeth_##pfx##_run_pass(xdp, bq, napi, ss, desc, run, \ + populate); \ +} + +/** + * LIBETH_XDP_DEFINE_RUN - define a driver buffer process, run + pass function + * @name: name of the function to define + * @run: name of the XDP prog run function to define + * @flush: driver callback to flush an ``XDP_TX`` bulk + * 
@populate: driver callback to fill an skb with HW descriptor info
+ */
+#define LIBETH_XDP_DEFINE_RUN(name, run, flush, populate)		     \
+	__LIBETH_XDP_DEFINE_RUN(name, run, flush, populate, XDP)
+
+#define __LIBETH_XDP_DEFINE_RUN(name, run, flush, populate, pfx)	     \
+	LIBETH_##pfx##_DEFINE_RUN_PROG(static run, flush);		     \
+	LIBETH_##pfx##_DEFINE_RUN_PASS(name, run, populate)
+
+/**
+ * LIBETH_XDP_DEFINE_FINALIZE - define a driver Rx NAPI poll finalize function
+ * @name: name of the function to define
+ * @flush: driver callback to flush an ``XDP_TX`` bulk
+ * @finalize: driver callback to finalize an XDPSQ and run the timer
+ */
+#define LIBETH_XDP_DEFINE_FINALIZE(name, flush, finalize)		     \
+	__LIBETH_XDP_DEFINE_FINALIZE(name, flush, finalize, xdp)
+
+#define __LIBETH_XDP_DEFINE_FINALIZE(name, flush, finalize, pfx)	     \
+void name(struct libeth_xdp_tx_bulk *bq)				     \
+{									     \
+	libeth_##pfx##_finalize_rx(bq, flush, finalize);		     \
+}
+
+#define LIBETH_XDP_DEFINE_END()	__diag_pop()
+
 /* Tx buffer completion */

 void libeth_xdp_return_buff_bulk(const struct skb_shared_info *sinfo,

From patchwork Tue Apr 15 17:28:20 2025
X-Patchwork-Submitter: Alexander Lobakin
X-Patchwork-Id: 14052497
X-Patchwork-Delegate: kuba@kernel.org
From: Alexander Lobakin
To: intel-wired-lan@lists.osuosl.org
Cc: Alexander Lobakin, Michal Kubiak, Maciej Fijalkowski, Tony Nguyen,
    Przemek Kitszel, Andrew Lunn, "David S. Miller", Eric Dumazet,
    Jakub Kicinski, Paolo Abeni, Alexei Starovoitov, Daniel Borkmann,
    Jesper Dangaard Brouer, John Fastabend, Simon Horman,
    bpf@vger.kernel.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH iwl-next 11/16] libeth: xdp: add RSS hash hint and XDP features setup helpers
Date: Tue, 15 Apr 2025 19:28:20 +0200
Message-ID: <20250415172825.3731091-12-aleksander.lobakin@intel.com>
In-Reply-To: <20250415172825.3731091-1-aleksander.lobakin@intel.com>
References: <20250415172825.3731091-1-aleksander.lobakin@intel.com>

End the XDP section by adding helpers to set up XDP features, flip
.ndo_xdp_xmit() support at runtime (for the case when it's not always
on), and calculate the queue clean/refill threshold.

Signed-off-by: Alexander Lobakin
---
 include/net/libeth/xdp.h                | 90 +++++++++++++++++++++++++
 drivers/net/ethernet/intel/libeth/xdp.c | 69 +++++++++++++++++++
 2 files changed, 159 insertions(+)

diff --git a/include/net/libeth/xdp.h b/include/net/libeth/xdp.h
index 643b0a8acab3..f0b1160bee51 100644
--- a/include/net/libeth/xdp.h
+++ b/include/net/libeth/xdp.h
@@ -1624,6 +1624,51 @@ void name(struct libeth_xdp_tx_bulk *bq)			     \

 #define LIBETH_XDP_DEFINE_END()	__diag_pop()

+/* XMO */
+
+/**
+ * libeth_xdp_buff_to_rq - get RQ pointer from an XDP buffer pointer
+ * @xdp: &libeth_xdp_buff corresponding to the queue
+ * @type: typeof() of the driver Rx queue structure
+ * @member: name of &xdp_rxq_info inside @type
+ *
+ * Oftentimes, a pointer to the RQ is needed when reading/filling metadata
+ * from HW descriptors. The helper can be used to quickly jump from an XDP
+ * buffer to the queue corresponding to its &xdp_rxq_info without introducing
+ * additional fields (&libeth_xdp_buff is precisely 1 cacheline long on x64).
+ */
+#define libeth_xdp_buff_to_rq(xdp, type, member)			     \
+	container_of_const((xdp)->base.rxq, type, member)
+
+/**
+ * libeth_xdpmo_rx_hash - convert &libeth_rx_pt to an XDP RSS hash metadata
+ * @hash: pointer to the variable to write the hash to
+ * @rss_type: pointer to the variable to write the hash type to
+ * @val: hash value from the HW descriptor
+ * @pt: libeth parsed packet type
+ *
+ * Handle zeroed/non-available hash and convert libeth parsed packet type to
+ * the corresponding XDP RSS hash type. To be called at the end of
+ * xdp_metadata_ops idpf_xdpmo::xmo_rx_hash() implementation.
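+ * A sketch of such a callback (the descriptor layout and the driver
+ * helpers are hypothetical):
+ *
+ *	static int my_xmo_rx_hash(const struct xdp_md *ctx, u32 *hash,
+ *				  enum xdp_rss_hash_type *rss_type)
+ *	{
+ *		const struct libeth_xdp_buff *xdp = (const void *)ctx;
+ *		const struct my_rx_desc *rxd = xdp->desc;
+ *
+ *		return libeth_xdpmo_rx_hash(hash, rss_type,
+ *					    le32_to_cpu(rxd->rss_hash),
+ *					    my_rx_pt(rxd));
+ *	}
+ *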
+ * Note that if the driver doesn't use a constant packet type lookup table but + * generates it at runtime, it must call libeth_rx_pt_gen_hash_type(pt) to + * generate XDP RSS hash type for each packet type. + * + * Return: 0 on success, -ENODATA when the hash is not available. + */ +static inline int libeth_xdpmo_rx_hash(u32 *hash, + enum xdp_rss_hash_type *rss_type, + u32 val, struct libeth_rx_pt pt) +{ + if (unlikely(!val)) + return -ENODATA; + + *hash = val; + *rss_type = pt.hash_type; + + return 0; +} + /* Tx buffer completion */ void libeth_xdp_return_buff_bulk(const struct skb_shared_info *sinfo, @@ -1690,4 +1735,49 @@ static inline void libeth_xdp_complete_tx(struct libeth_sqe *sqe, __libeth_xdp_complete_tx(sqe, cp, libeth_xdp_return_buff_bulk); } +/* Misc */ + +u32 libeth_xdp_queue_threshold(u32 count); + +void __libeth_xdp_set_features(struct net_device *dev, + const struct xdp_metadata_ops *xmo); +void libeth_xdp_set_redirect(struct net_device *dev, bool enable); + +/** + * libeth_xdp_set_features - set XDP features for netdev + * @dev: &net_device to configure + * @...: optional params, see __libeth_xdp_set_features() + * + * Set all the features libeth_xdp supports, including .ndo_xdp_xmit(). That + * said, it should be used only when XDPSQs are always available regardless + * of whether an XDP prog is attached to @dev. + */ +#define libeth_xdp_set_features(dev, ...) \ + CONCATENATE(__libeth_xdp_feat, \ + COUNT_ARGS(__VA_ARGS__))(dev, ##__VA_ARGS__) + +#define __libeth_xdp_feat0(dev) \ + __libeth_xdp_set_features(dev, NULL) +#define __libeth_xdp_feat1(dev, xmo) \ + __libeth_xdp_set_features(dev, xmo) + +/** + * libeth_xdp_set_features_noredir - enable all libeth_xdp features w/o redir + * @dev: target &net_device + * @...: optional params, see __libeth_xdp_set_features() + * + * Enable everything except the .ndo_xdp_xmit() feature, use when XDPSQs are + * not available right after netdev registration. + */ +#define libeth_xdp_set_features_noredir(dev, ...) \ + __libeth_xdp_set_features_noredir(dev, __UNIQUE_ID(dev_), \ + ##__VA_ARGS__) + +#define __libeth_xdp_set_features_noredir(dev, ud, ...) do { \ + struct net_device *ud = (dev); \ + \ + libeth_xdp_set_features(ud, ##__VA_ARGS__); \ + libeth_xdp_set_redirect(ud, false); \ +} while (0) + #endif /* __LIBETH_XDP_H */ diff --git a/drivers/net/ethernet/intel/libeth/xdp.c b/drivers/net/ethernet/intel/libeth/xdp.c index a20ba2478097..975c34af2f0f 100644 --- a/drivers/net/ethernet/intel/libeth/xdp.c +++ b/drivers/net/ethernet/intel/libeth/xdp.c @@ -338,6 +338,75 @@ void libeth_xdp_return_buff_bulk(const struct skb_shared_info *sinfo, } EXPORT_SYMBOL_GPL(libeth_xdp_return_buff_bulk); +/* Misc */ + +/** + * libeth_xdp_queue_threshold - calculate XDP queue clean/refill threshold + * @count: number of descriptors in the queue + * + * The threshold is the limit at which RQs start to refill (when the number of + * empty buffers exceeds it) and SQs get cleaned up (when the number of free + * descriptors goes below it). To speed up hotpath processing, threshold is + * always pow-2, closest to 1/4 of the queue length. + * Don't call it on hotpath, calculate and cache the threshold during the + * queue initialization. + * + * Return: the calculated threshold. 
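+ *
+ * E.g. for a 1000-descriptor queue: quarter = 250, the neighbouring powers
+ * of two are 128 and 256; 256 is closer to 250, so the threshold is 256.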
+ */
+u32 libeth_xdp_queue_threshold(u32 count)
+{
+	u32 quarter, low, high;
+
+	if (likely(is_power_of_2(count)))
+		return count >> 2;
+
+	quarter = DIV_ROUND_CLOSEST(count, 4);
+	low = rounddown_pow_of_two(quarter);
+	high = roundup_pow_of_two(quarter);
+
+	return high - quarter <= quarter - low ? high : low;
+}
+EXPORT_SYMBOL_GPL(libeth_xdp_queue_threshold);
+
+/**
+ * __libeth_xdp_set_features - set XDP features for netdev
+ * @dev: &net_device to configure
+ * @xmo: XDP metadata ops (Rx hints)
+ *
+ * Set all the features libeth_xdp supports. Only the first argument is
+ * necessary.
+ * Use the non-underscored versions in drivers instead.
+ */
+void __libeth_xdp_set_features(struct net_device *dev,
+			       const struct xdp_metadata_ops *xmo)
+{
+	xdp_set_features_flag(dev,
+			      NETDEV_XDP_ACT_BASIC |
+			      NETDEV_XDP_ACT_REDIRECT |
+			      NETDEV_XDP_ACT_NDO_XMIT |
+			      NETDEV_XDP_ACT_RX_SG |
+			      NETDEV_XDP_ACT_NDO_XMIT_SG);
+	dev->xdp_metadata_ops = xmo;
+}
+EXPORT_SYMBOL_GPL(__libeth_xdp_set_features);
+
+/**
+ * libeth_xdp_set_redirect - toggle the XDP redirect feature
+ * @dev: &net_device to configure
+ * @enable: whether XDP is enabled
+ *
+ * Use this when XDPSQs are not always available to dynamically enable
+ * and disable the redirect feature.
+ */
+void libeth_xdp_set_redirect(struct net_device *dev, bool enable)
+{
+	if (enable)
+		xdp_features_set_redirect_target(dev, true);
+	else
+		xdp_features_clear_redirect_target(dev);
+}
+EXPORT_SYMBOL_GPL(libeth_xdp_set_redirect);
+
 /* Module */

 static const struct libeth_xdp_ops xdp_ops __initconst = {

From patchwork Tue Apr 15 17:28:21 2025
X-Patchwork-Submitter: Alexander Lobakin
X-Patchwork-Id: 14052498
X-Patchwork-Delegate: kuba@kernel.org
From: Alexander Lobakin
To: intel-wired-lan@lists.osuosl.org
Cc: Alexander Lobakin, Michal Kubiak, Maciej Fijalkowski, Tony Nguyen,
    Przemek Kitszel, Andrew Lunn, "David S. Miller", Eric Dumazet,
    Jakub Kicinski, Paolo Abeni, Alexei Starovoitov, Daniel Borkmann,
    Jesper Dangaard Brouer, John Fastabend, Simon Horman,
    bpf@vger.kernel.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH iwl-next 12/16] libeth: xsk: add XSk XDP_TX sending helpers
Date: Tue, 15 Apr 2025 19:28:21 +0200
Message-ID: <20250415172825.3731091-13-aleksander.lobakin@intel.com>
In-Reply-To: <20250415172825.3731091-1-aleksander.lobakin@intel.com>
References: <20250415172825.3731091-1-aleksander.lobakin@intel.com>

Add XSk counterparts for XDP_TX buffer sending and completion. The same
base structures and functions from the libeth_xdp core are used, with
the adjustment that XSk Rx always operates on &xdp_buff_xsk for both
head and frags. And unlike regular Rx, unlikely() is used for frags
here, as header split gives no benefits for XSk Rx, at least for now.

Signed-off-by: Alexander Lobakin
---
 drivers/net/ethernet/intel/libeth/Kconfig  |   2 +-
 drivers/net/ethernet/intel/libeth/Makefile |   1 +
 drivers/net/ethernet/intel/libeth/priv.h   |   6 +
 include/net/libeth/tx.h                    |   6 +
 include/net/libeth/xdp.h                   |  26 +++-
 include/net/libeth/xsk.h                   | 148 +++++++++++++++++++++
 drivers/net/ethernet/intel/libeth/tx.c     |   5 +-
 drivers/net/ethernet/intel/libeth/xdp.c    |   7 +-
 drivers/net/ethernet/intel/libeth/xsk.c    |  32 +++++
 9 files changed, 224 insertions(+), 9 deletions(-)
 create mode 100644 include/net/libeth/xsk.h
 create mode 100644 drivers/net/ethernet/intel/libeth/xsk.c

diff --git a/drivers/net/ethernet/intel/libeth/Kconfig b/drivers/net/ethernet/intel/libeth/Kconfig
index d8c4926574fb..2445b979c499 100644
--- a/drivers/net/ethernet/intel/libeth/Kconfig
+++ b/drivers/net/ethernet/intel/libeth/Kconfig
@@ -12,4 +12,4 @@ config LIBETH_XDP
 	tristate "Common XDP library (libeth_xdp)" if COMPILE_TEST
 	select LIBETH
 	help
-	  XDP helpers based on libeth hotpath management.
+	  XDP and XSk helpers based on libeth hotpath management.
diff --git a/drivers/net/ethernet/intel/libeth/Makefile b/drivers/net/ethernet/intel/libeth/Makefile index 51669840ee06..350bc0b38bad 100644 --- a/drivers/net/ethernet/intel/libeth/Makefile +++ b/drivers/net/ethernet/intel/libeth/Makefile @@ -9,3 +9,4 @@ libeth-y += tx.o obj-$(CONFIG_LIBETH_XDP) += libeth_xdp.o libeth_xdp-y += xdp.o +libeth_xdp-y += xsk.o diff --git a/drivers/net/ethernet/intel/libeth/priv.h b/drivers/net/ethernet/intel/libeth/priv.h index 1bd6e2d7a3e7..ebcb26f24401 100644 --- a/drivers/net/ethernet/intel/libeth/priv.h +++ b/drivers/net/ethernet/intel/libeth/priv.h @@ -8,12 +8,18 @@ /* XDP */ +struct libeth_xdp_buff; +struct libeth_xdp_tx_frame; struct skb_shared_info; struct xdp_frame_bulk; +void libeth_xsk_tx_return_bulk(const struct libeth_xdp_tx_frame *bq, + u32 count); + struct libeth_xdp_ops { void (*bulk)(const struct skb_shared_info *sinfo, struct xdp_frame_bulk *bq, bool frags); + void (*xsk)(struct libeth_xdp_buff *xdp); }; void libeth_attach_xdp(const struct libeth_xdp_ops *ops); diff --git a/include/net/libeth/tx.h b/include/net/libeth/tx.h index 33b9bb22f6ac..44192bec86d7 100644 --- a/include/net/libeth/tx.h +++ b/include/net/libeth/tx.h @@ -21,6 +21,8 @@ * @LIBETH_SQE_XDP_TX: &skb_shared_info, libeth_xdp_return_buff_bulk(), stats * @LIBETH_SQE_XDP_XMIT: &xdp_frame, unmap and xdp_return_frame_bulk(), stats * @LIBETH_SQE_XDP_XMIT_FRAG: &xdp_frame frag, only unmap DMA + * @LIBETH_SQE_XSK_TX: &libeth_xdp_buff on XSk queue, xsk_buff_free(), stats + * @LIBETH_SQE_XSK_TX_FRAG: &libeth_xdp_buff frag on XSk queue, xsk_buff_free() */ enum libeth_sqe_type { LIBETH_SQE_EMPTY = 0U, @@ -33,6 +35,8 @@ enum libeth_sqe_type { LIBETH_SQE_XDP_TX = __LIBETH_SQE_XDP_START, LIBETH_SQE_XDP_XMIT, LIBETH_SQE_XDP_XMIT_FRAG, + LIBETH_SQE_XSK_TX, + LIBETH_SQE_XSK_TX_FRAG, }; /** @@ -43,6 +47,7 @@ enum libeth_sqe_type { * @skb: &sk_buff to consume * @sinfo: skb shared info of an XDP_TX frame * @xdpf: XDP frame from ::ndo_xdp_xmit() + * @xsk: XSk Rx frame from XDP_TX action * @dma: DMA address to unmap * @len: length of the mapped region to unmap * @nr_frags: number of frags in the frame this buffer belongs to @@ -59,6 +64,7 @@ struct libeth_sqe { struct sk_buff *skb; struct skb_shared_info *sinfo; struct xdp_frame *xdpf; + struct libeth_xdp_buff *xsk; }; DEFINE_DMA_UNMAP_ADDR(dma); diff --git a/include/net/libeth/xdp.h b/include/net/libeth/xdp.h index f0b1160bee51..dcda8f7decfa 100644 --- a/include/net/libeth/xdp.h +++ b/include/net/libeth/xdp.h @@ -279,6 +279,7 @@ libeth_xdpsq_run_timer(struct work_struct *work, * @LIBETH_XDP_TX_BATCH: batch size for which the queue fill loop is unrolled * @LIBETH_XDP_TX_DROP: indicates the send function must drop frames not sent * @LIBETH_XDP_TX_NDO: whether the send function is called from .ndo_xdp_xmit() + * @LIBETH_XDP_TX_XSK: whether the function is called for ``XDP_TX`` for XSk */ enum { LIBETH_XDP_TX_BULK = DEV_MAP_BULK_SIZE, @@ -286,6 +287,7 @@ enum { LIBETH_XDP_TX_DROP = BIT(0), LIBETH_XDP_TX_NDO = BIT(1), + LIBETH_XDP_TX_XSK = BIT(2), }; /** @@ -314,7 +316,8 @@ enum { * @frag: one (non-head) frag for ``XDP_TX`` * @xdpf: &xdp_frame for the head frag for .ndo_xdp_xmit() * @dma: DMA address of the non-head frag for .ndo_xdp_xmit() - * @len: frag length for .ndo_xdp_xmit() + * @xsk: ``XDP_TX`` for XSk, XDP buffer for any frag + * @len: frag length for XSk ``XDP_TX`` and .ndo_xdp_xmit() * @flags: Tx flags for the above * @opts: combined @len + @flags for the above for speed */ @@ -330,11 +333,13 @@ struct libeth_xdp_tx_frame { /* ``XDP_TX`` frag */ 
skb_frag_t frag; - /* .ndo_xdp_xmit() */ + /* .ndo_xdp_xmit(), XSk ``XDP_TX`` */ struct { union { struct xdp_frame *xdpf; dma_addr_t dma; + + struct libeth_xdp_buff *xsk; }; union { struct { @@ -375,6 +380,7 @@ struct libeth_xdp_tx_bulk { /** * struct libeth_xdpsq - abstraction for an XDPSQ + * @pool: XSk buffer pool for XSk ``XDP_TX`` * @sqes: array of Tx buffers from the actual queue struct * @descs: opaque pointer to the HW descriptor array * @ntu: pointer to the next free descriptor index @@ -388,6 +394,7 @@ struct libeth_xdp_tx_bulk { * functions can access and modify driver-specific resources. */ struct libeth_xdpsq { + struct xsk_buff_pool *pool; struct libeth_sqe *sqes; void *descs; @@ -690,7 +697,7 @@ void libeth_xdp_tx_exception(struct libeth_xdp_tx_bulk *bq, u32 sent, /** * __libeth_xdp_tx_flush_bulk - internal helper to flush one XDP Tx bulk * @bq: bulk to flush - * @flags: XDP TX flags (.ndo_xdp_xmit() etc.) + * @flags: XDP TX flags (.ndo_xdp_xmit(), XSk etc.) * @prep: driver-specific callback to prepare the queue for sending * @fill: libeth_xdp callback to fill &libeth_sqe and &libeth_xdp_tx_desc * @xmit: driver callback to fill a HW descriptor @@ -1673,12 +1680,14 @@ static inline int libeth_xdpmo_rx_hash(u32 *hash, void libeth_xdp_return_buff_bulk(const struct skb_shared_info *sinfo, struct xdp_frame_bulk *bq, bool frags); +void libeth_xsk_buff_free_slow(struct libeth_xdp_buff *xdp); /** * __libeth_xdp_complete_tx - complete sent XDPSQE * @sqe: SQ element / Tx buffer to complete * @cp: Tx polling/completion params * @bulk: internal callback to bulk-free ``XDP_TX`` buffers + * @xsk: internal callback to free XSk ``XDP_TX`` buffers * * Use the non-underscored version in drivers instead. This one is shared * internally with libeth_tx_complete_any(). 
@@ -1687,7 +1696,8 @@ void libeth_xdp_return_buff_bulk(const struct skb_shared_info *sinfo, */ static __always_inline void __libeth_xdp_complete_tx(struct libeth_sqe *sqe, struct libeth_cq_pp *cp, - typeof(libeth_xdp_return_buff_bulk) bulk) + typeof(libeth_xdp_return_buff_bulk) bulk, + typeof(libeth_xsk_buff_free_slow) xsk) { enum libeth_sqe_type type = sqe->type; @@ -1710,6 +1720,10 @@ __libeth_xdp_complete_tx(struct libeth_sqe *sqe, struct libeth_cq_pp *cp, case LIBETH_SQE_XDP_XMIT: xdp_return_frame_bulk(sqe->xdpf, cp->bq); break; + case LIBETH_SQE_XSK_TX: + case LIBETH_SQE_XSK_TX_FRAG: + xsk(sqe->xsk); + break; default: break; } @@ -1717,6 +1731,7 @@ __libeth_xdp_complete_tx(struct libeth_sqe *sqe, struct libeth_cq_pp *cp, switch (type) { case LIBETH_SQE_XDP_TX: case LIBETH_SQE_XDP_XMIT: + case LIBETH_SQE_XSK_TX: cp->xdp_tx -= sqe->nr_frags; cp->xss->packets++; @@ -1732,7 +1747,8 @@ __libeth_xdp_complete_tx(struct libeth_sqe *sqe, struct libeth_cq_pp *cp, static inline void libeth_xdp_complete_tx(struct libeth_sqe *sqe, struct libeth_cq_pp *cp) { - __libeth_xdp_complete_tx(sqe, cp, libeth_xdp_return_buff_bulk); + __libeth_xdp_complete_tx(sqe, cp, libeth_xdp_return_buff_bulk, + libeth_xsk_buff_free_slow); } /* Misc */ diff --git a/include/net/libeth/xsk.h b/include/net/libeth/xsk.h new file mode 100644 index 000000000000..af69b46fa7e4 --- /dev/null +++ b/include/net/libeth/xsk.h @@ -0,0 +1,148 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* Copyright (C) 2025 Intel Corporation */ + +#ifndef __LIBETH_XSK_H +#define __LIBETH_XSK_H + +#include +#include + +/* ``XDP_TX`` bulking */ + +/** + * libeth_xsk_tx_queue_head - internal helper for queueing XSk ``XDP_TX`` head + * @bq: XDP Tx bulk to queue the head frag to + * @xdp: XSk buffer with the head to queue + * + * Return: false if it's the only frag of the frame, true if it's an S/G frame. + */ +static inline bool libeth_xsk_tx_queue_head(struct libeth_xdp_tx_bulk *bq, + struct libeth_xdp_buff *xdp) +{ + bq->bulk[bq->count++] = (typeof(*bq->bulk)){ + .xsk = xdp, + .len = xdp->base.data_end - xdp->data, + .flags = LIBETH_XDP_TX_FIRST, + }; + + if (likely(!xdp_buff_has_frags(&xdp->base))) + return false; + + bq->bulk[bq->count - 1].flags |= LIBETH_XDP_TX_MULTI; + + return true; +} + +/** + * libeth_xsk_tx_queue_frag - internal helper for queueing XSk ``XDP_TX`` frag + * @bq: XDP Tx bulk to queue the frag to + * @frag: XSk frag to queue + */ +static inline void libeth_xsk_tx_queue_frag(struct libeth_xdp_tx_bulk *bq, + struct libeth_xdp_buff *frag) +{ + bq->bulk[bq->count++] = (typeof(*bq->bulk)){ + .xsk = frag, + .len = frag->base.data_end - frag->data, + }; +} + +/** + * libeth_xsk_tx_queue_bulk - internal helper for queueing XSk ``XDP_TX`` frame + * @bq: XDP Tx bulk to queue the frame to + * @xdp: XSk buffer to queue + * @flush_bulk: driver callback to flush the bulk to the HW queue + * + * Return: true on success, false on flush error. 
+ */ +static __always_inline bool +libeth_xsk_tx_queue_bulk(struct libeth_xdp_tx_bulk *bq, + struct libeth_xdp_buff *xdp, + bool (*flush_bulk)(struct libeth_xdp_tx_bulk *bq, + u32 flags)) +{ + bool ret = true; + + if (unlikely(bq->count == LIBETH_XDP_TX_BULK) && + unlikely(!flush_bulk(bq, LIBETH_XDP_TX_XSK))) { + libeth_xsk_buff_free_slow(xdp); + return false; + } + + if (!libeth_xsk_tx_queue_head(bq, xdp)) + goto out; + + for (const struct libeth_xdp_buff *head = xdp; ; ) { + xdp = container_of(xsk_buff_get_frag(&head->base), + typeof(*xdp), base); + if (!xdp) + break; + + if (unlikely(bq->count == LIBETH_XDP_TX_BULK) && + unlikely(!flush_bulk(bq, LIBETH_XDP_TX_XSK))) { + ret = false; + break; + } + + libeth_xsk_tx_queue_frag(bq, xdp); + } + +out: + bq->bulk[bq->count - 1].flags |= LIBETH_XDP_TX_LAST; + + return ret; +} + +/** + * libeth_xsk_tx_fill_buf - internal helper to fill XSk ``XDP_TX`` &libeth_sqe + * @frm: XDP Tx frame from the bulk + * @i: index on the HW queue + * @sq: XDPSQ abstraction for the queue + * @priv: private data + * + * Return: XDP Tx descriptor with the synced DMA and other info to pass to + * the driver callback. + */ +static inline struct libeth_xdp_tx_desc +libeth_xsk_tx_fill_buf(struct libeth_xdp_tx_frame frm, u32 i, + const struct libeth_xdpsq *sq, u64 priv) +{ + struct libeth_xdp_buff *xdp = frm.xsk; + struct libeth_xdp_tx_desc desc = { + .addr = xsk_buff_xdp_get_dma(&xdp->base), + .opts = frm.opts, + }; + struct libeth_sqe *sqe; + + xsk_buff_raw_dma_sync_for_device(sq->pool, desc.addr, desc.len); + + sqe = &sq->sqes[i]; + sqe->xsk = xdp; + + if (!(desc.flags & LIBETH_XDP_TX_FIRST)) { + sqe->type = LIBETH_SQE_XSK_TX_FRAG; + return desc; + } + + sqe->type = LIBETH_SQE_XSK_TX; + libeth_xdp_tx_fill_stats(sqe, &desc, + xdp_get_shared_info_from_buff(&xdp->base)); + + return desc; +} + +/** + * libeth_xsk_tx_flush_bulk - wrapper to define flush of XSk ``XDP_TX`` bulk + * @bq: bulk to flush + * @flags: Tx flags, see __libeth_xdp_tx_flush_bulk() + * @prep: driver callback to prepare the queue + * @xmit: driver callback to fill a HW descriptor + * + * Use via LIBETH_XSK_DEFINE_FLUSH_TX() to define an XSk ``XDP_TX`` driver + * callback. + */ +#define libeth_xsk_tx_flush_bulk(bq, flags, prep, xmit) \ + __libeth_xdp_tx_flush_bulk(bq, (flags) | LIBETH_XDP_TX_XSK, prep, \ + libeth_xsk_tx_fill_buf, xmit) + +#endif /* __LIBETH_XSK_H */ diff --git a/drivers/net/ethernet/intel/libeth/tx.c b/drivers/net/ethernet/intel/libeth/tx.c index 227c841ab16a..e0167f43d2a8 100644 --- a/drivers/net/ethernet/intel/libeth/tx.c +++ b/drivers/net/ethernet/intel/libeth/tx.c @@ -10,6 +10,7 @@ /* Tx buffer completion */ DEFINE_STATIC_CALL_NULL(bulk, libeth_xdp_return_buff_bulk); +DEFINE_STATIC_CALL_NULL(xsk, libeth_xsk_buff_free_slow); /** * libeth_tx_complete_any - perform Tx completion for one SQE of any type @@ -23,7 +24,8 @@ DEFINE_STATIC_CALL_NULL(bulk, libeth_xdp_return_buff_bulk); void libeth_tx_complete_any(struct libeth_sqe *sqe, struct libeth_cq_pp *cp) { if (sqe->type >= __LIBETH_SQE_XDP_START) - __libeth_xdp_complete_tx(sqe, cp, static_call(bulk)); + __libeth_xdp_complete_tx(sqe, cp, static_call(bulk), + static_call(xsk)); else libeth_tx_complete(sqe, cp); } @@ -34,5 +36,6 @@ EXPORT_SYMBOL_GPL(libeth_tx_complete_any); void libeth_attach_xdp(const struct libeth_xdp_ops *ops) { static_call_update(bulk, ops ? ops->bulk : NULL); + static_call_update(xsk, ops ? 
ops->xsk : NULL); } EXPORT_SYMBOL_GPL(libeth_attach_xdp); diff --git a/drivers/net/ethernet/intel/libeth/xdp.c b/drivers/net/ethernet/intel/libeth/xdp.c index 975c34af2f0f..0aed2db00b28 100644 --- a/drivers/net/ethernet/intel/libeth/xdp.c +++ b/drivers/net/ethernet/intel/libeth/xdp.c @@ -112,7 +112,7 @@ static void __cold libeth_trace_xdp_exception(const struct net_device *dev, * libeth_xdp_tx_exception - handle Tx exceptions of XDP frames * @bq: XDP Tx frame bulk * @sent: number of frames sent successfully (from this bulk) - * @flags: internal libeth_xdp flags (.ndo_xdp_xmit etc.) + * @flags: internal libeth_xdp flags (XSk, .ndo_xdp_xmit etc.) * * Cold helper used by __libeth_xdp_tx_flush_bulk(), do not call directly. * Reports XDP Tx exceptions, frees the frames that won't be sent or adjust @@ -134,7 +134,9 @@ void __cold libeth_xdp_tx_exception(struct libeth_xdp_tx_bulk *bq, u32 sent, return; } - if (!(flags & LIBETH_XDP_TX_NDO)) + if (flags & LIBETH_XDP_TX_XSK) + libeth_xsk_tx_return_bulk(pos, left); + else if (!(flags & LIBETH_XDP_TX_NDO)) libeth_xdp_tx_return_bulk(pos, left); else libeth_xdp_xmit_return_bulk(pos, left, bq->dev); @@ -411,6 +413,7 @@ EXPORT_SYMBOL_GPL(libeth_xdp_set_redirect); static const struct libeth_xdp_ops xdp_ops __initconst = { .bulk = libeth_xdp_return_buff_bulk, + .xsk = libeth_xsk_buff_free_slow, }; static int __init libeth_xdp_module_init(void) diff --git a/drivers/net/ethernet/intel/libeth/xsk.c b/drivers/net/ethernet/intel/libeth/xsk.c new file mode 100644 index 000000000000..7cc47d515d45 --- /dev/null +++ b/drivers/net/ethernet/intel/libeth/xsk.c @@ -0,0 +1,32 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* Copyright (C) 2025 Intel Corporation */ + +#define DEFAULT_SYMBOL_NAMESPACE "LIBETH_XDP" + +#include + +#include "priv.h" + +/* ``XDP_TX`` bulking */ + +void __cold libeth_xsk_tx_return_bulk(const struct libeth_xdp_tx_frame *bq, + u32 count) +{ + for (u32 i = 0; i < count; i++) + libeth_xsk_buff_free_slow(bq[i].xsk); +} + +/* Rx polling path */ + +/** + * libeth_xsk_buff_free_slow - free an XSk Rx buffer + * @xdp: buffer to free + * + * Slowpath version of xsk_buff_free() to be used on exceptions, cleanups etc. + * to avoid unwanted inlining. 
+ */
+void libeth_xsk_buff_free_slow(struct libeth_xdp_buff *xdp)
+{
+	xsk_buff_free(&xdp->base);
+}
+EXPORT_SYMBOL_GPL(libeth_xsk_buff_free_slow);

From patchwork Tue Apr 15 17:28:22 2025
X-Patchwork-Submitter: Alexander Lobakin
X-Patchwork-Id: 14052499
X-Patchwork-Delegate: kuba@kernel.org
From: Alexander Lobakin
To: intel-wired-lan@lists.osuosl.org
Cc: Alexander Lobakin , Michal Kubiak , Maciej Fijalkowski ,
 Tony Nguyen , Przemek Kitszel , Andrew Lunn , "David S.
Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni , Alexei Starovoitov , Daniel Borkmann , Jesper Dangaard Brouer , John Fastabend , Simon Horman , bpf@vger.kernel.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org Subject: [PATCH iwl-next 13/16] libeth: xsk: add XSk xmit functions Date: Tue, 15 Apr 2025 19:28:22 +0200 Message-ID: <20250415172825.3731091-14-aleksander.lobakin@intel.com> X-Mailer: git-send-email 2.49.0 In-Reply-To: <20250415172825.3731091-1-aleksander.lobakin@intel.com> References: <20250415172825.3731091-1-aleksander.lobakin@intel.com> Precedence: bulk X-Mailing-List: bpf@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Patchwork-Delegate: kuba@kernel.org Reuse core sending functions to send XSk xmit frames. Both metadata and no metadata pools/driver are supported. libeth_xdp also provides generic XSk metadata ops, currently with the checksum offload only and for cases when HW doesn't require supplying L3/L4 checksum offsets. Drivers are free to pass their own ops. &libeth_xdp_tx_bulk is not used here as it would be redundant; pool->tx_descs are accessed directly. Fake "libeth_xsktmo" is needed to hide implementation details from the drivers when they want to use the generic ops: the original struct is defined in the same file where dev->xsk_tx_metadata_ops gets set to avoid duplication of slowpath; at the same time; XSk xmit functions use local "fast" copy to inline XMO callbacks. Tx descriptor filling loop is unrolled by 8. Suggested-by: Maciej Fijalkowski # optimizations Signed-off-by: Alexander Lobakin --- drivers/net/ethernet/intel/libeth/priv.h | 2 + include/net/libeth/tx.h | 4 +- include/net/libeth/xdp.h | 73 ++++++++-- include/net/libeth/xsk.h | 166 +++++++++++++++++++++++ drivers/net/ethernet/intel/libeth/xdp.c | 14 +- drivers/net/ethernet/intel/libeth/xsk.c | 6 + 6 files changed, 248 insertions(+), 17 deletions(-) diff --git a/drivers/net/ethernet/intel/libeth/priv.h b/drivers/net/ethernet/intel/libeth/priv.h index ebcb26f24401..03e74382b2cb 100644 --- a/drivers/net/ethernet/intel/libeth/priv.h +++ b/drivers/net/ethernet/intel/libeth/priv.h @@ -13,6 +13,8 @@ struct libeth_xdp_tx_frame; struct skb_shared_info; struct xdp_frame_bulk; +extern const struct xsk_tx_metadata_ops libeth_xsktmo_slow; + void libeth_xsk_tx_return_bulk(const struct libeth_xdp_tx_frame *bq, u32 count); diff --git a/include/net/libeth/tx.h b/include/net/libeth/tx.h index 44192bec86d7..c3db5c6f1641 100644 --- a/include/net/libeth/tx.h +++ b/include/net/libeth/tx.h @@ -12,7 +12,7 @@ /** * enum libeth_sqe_type - type of &libeth_sqe to act on Tx completion - * @LIBETH_SQE_EMPTY: unused/empty OR XDP_TX frag, no action required + * @LIBETH_SQE_EMPTY: unused/empty OR XDP_TX/XSk frame, no action required * @LIBETH_SQE_CTX: context descriptor with empty SQE, no action required * @LIBETH_SQE_SLAB: kmalloc-allocated buffer, unmap and kfree() * @LIBETH_SQE_FRAG: mapped skb frag, only unmap DMA @@ -93,7 +93,7 @@ struct libeth_sqe { * @bq: XDP frame bulk to combine return operations * @ss: onstack NAPI stats to fill * @xss: onstack XDPSQ NAPI stats to fill - * @xdp_tx: number of XDP frames processed + * @xdp_tx: number of XDP-not-XSk frames processed * @napi: whether it's called from the NAPI context * * libeth uses this structure to access objects needed for performing full diff --git a/include/net/libeth/xdp.h b/include/net/libeth/xdp.h index dcda8f7decfa..feb74dc36f1b 100644 --- a/include/net/libeth/xdp.h +++ b/include/net/libeth/xdp.h @@ -293,6 +293,8 @@ enum { 
/** * enum - &libeth_xdp_tx_frame and &libeth_xdp_tx_desc flags * @LIBETH_XDP_TX_LEN: only for ``XDP_TX``, [15:0] of ::len_fl is actual length + * @LIBETH_XDP_TX_CSUM: for XSk xmit, enable checksum offload + * @LIBETH_XDP_TX_XSKMD: for XSk xmit, mask of the metadata bits * @LIBETH_XDP_TX_FIRST: indicates the frag is the first one of the frame * @LIBETH_XDP_TX_LAST: whether the frag is the last one of the frame * @LIBETH_XDP_TX_MULTI: whether the frame contains several frags @@ -301,6 +303,9 @@ enum { enum { LIBETH_XDP_TX_LEN = GENMASK(15, 0), + LIBETH_XDP_TX_CSUM = XDP_TXMD_FLAGS_CHECKSUM, + LIBETH_XDP_TX_XSKMD = LIBETH_XDP_TX_LEN, + LIBETH_XDP_TX_FIRST = BIT(16), LIBETH_XDP_TX_LAST = BIT(17), LIBETH_XDP_TX_MULTI = BIT(18), @@ -320,6 +325,7 @@ enum { * @len: frag length for XSk ``XDP_TX`` and .ndo_xdp_xmit() * @flags: Tx flags for the above * @opts: combined @len + @flags for the above for speed + * @desc: XSk xmit descriptor for direct casting */ struct libeth_xdp_tx_frame { union { @@ -349,10 +355,14 @@ struct libeth_xdp_tx_frame { aligned_u64 opts; }; }; + + /* XSk xmit */ + struct xdp_desc desc; }; -} __aligned_largest; +} __aligned(sizeof(struct xdp_desc)); static_assert(offsetof(struct libeth_xdp_tx_frame, frag.len) == offsetof(struct libeth_xdp_tx_frame, len_fl)); +static_assert(sizeof(struct libeth_xdp_tx_frame) == sizeof(struct xdp_desc)); /** * struct libeth_xdp_tx_bulk - XDP Tx frame bulk for bulk sending @@ -363,10 +373,13 @@ static_assert(offsetof(struct libeth_xdp_tx_frame, frag.len) == * @count: current number of frames in @bulk * @bulk: array of queued frames for bulk Tx * - * All XDP Tx operations queue each frame to the bulk first and flush it - * when @count reaches the array end. Bulk is always placed on the stack - * for performance. One bulk element contains all the data necessary + * All XDP Tx operations except XSk xmit queue each frame to the bulk first + * and flush it when @count reaches the array end. Bulk is always placed on + * the stack for performance. One bulk element contains all the data necessary * for sending a frame and then freeing it on completion. + * For XSk xmit, Tx descriptor array from &xsk_buff_pool is casted directly + * to &libeth_xdp_tx_frame as they are compatible and the bulk structure is + * not used. */ struct libeth_xdp_tx_bulk { const struct bpf_prog *prog; @@ -380,13 +393,13 @@ struct libeth_xdp_tx_bulk { /** * struct libeth_xdpsq - abstraction for an XDPSQ - * @pool: XSk buffer pool for XSk ``XDP_TX`` + * @pool: XSk buffer pool for XSk ``XDP_TX`` and xmit * @sqes: array of Tx buffers from the actual queue struct * @descs: opaque pointer to the HW descriptor array * @ntu: pointer to the next free descriptor index * @count: number of descriptors on that queue * @pending: pointer to the number of sent-not-completed descs on that queue - * @xdp_tx: pointer to the above + * @xdp_tx: pointer to the above, but only for non-XSk-xmit frames * @lock: corresponding XDPSQ lock * * Abstraction for driver-independent implementation of Tx. Placed on the stack @@ -427,6 +440,30 @@ struct libeth_xdp_tx_desc { }; } __aligned_largest; +/** + * libeth_xdp_ptr_to_priv - convert pointer to a libeth_xdp u64 priv + * @ptr: pointer to convert + * + * The main sending function passes private data as the largest scalar, u64. + * Use this helper when you want to pass a pointer there. 
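+ *
+ * A short illustration of the round trip (using the XSk Tx metadata ops
+ * pointer as an example):
+ *
+ *	u64 priv = libeth_xdp_ptr_to_priv(tmo);
+ *	const struct xsk_tx_metadata_ops *ops = libeth_xdp_priv_to_ptr(priv);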
+ */ +#define libeth_xdp_ptr_to_priv(ptr) ({ \ + typecheck_pointer(ptr); \ + ((u64)(uintptr_t)(ptr)); \ +}) +/** + * libeth_xdp_priv_to_ptr - convert libeth_xdp u64 priv to a pointer + * @priv: private data to convert + * + * The main sending function passes private data as the largest scalar, u64. + * Use this helper when your callback takes this u64 and you want to convert + * it back to a pointer. + */ +#define libeth_xdp_priv_to_ptr(priv) ({ \ + static_assert(__same_type(priv, u64)); \ + ((const void *)(uintptr_t)(priv)); \ +}) + /** * libeth_xdp_tx_xmit_bulk - main XDP Tx function * @bulk: array of frames to send @@ -439,10 +476,11 @@ struct libeth_xdp_tx_desc { * @xmit: callback for filling a HW descriptor with the frame info * * Internal abstraction for placing @n XDP Tx frames on the HW XDPSQ. Used for - * all types of frames. + * all types of frames: ``XDP_TX``, .ndo_xdp_xmit(), XSk ``XDP_TX``, and XSk + * xmit. * @prep must lock the queue as this function releases it at the end. @unroll - * greatly increases the object code size, but also greatly increases - * performance. + * greatly increases the object code size, but also greatly increases XSk xmit + * performance; for other types of frames, it's not enabled. * The compilers inline all those onstack abstractions to direct data accesses. * * Return: number of frames actually placed on the queue, <= @n. The function @@ -702,7 +740,8 @@ void libeth_xdp_tx_exception(struct libeth_xdp_tx_bulk *bq, u32 sent, * @fill: libeth_xdp callback to fill &libeth_sqe and &libeth_xdp_tx_desc * @xmit: driver callback to fill a HW descriptor * - * Internal abstraction to create bulk flush functions for drivers. + * Internal abstraction to create bulk flush functions for drivers. Used for + * everything except XSk xmit. * * Return: true if anything was sent, false otherwise. 
*/ @@ -1756,7 +1795,9 @@ static inline void libeth_xdp_complete_tx(struct libeth_sqe *sqe, u32 libeth_xdp_queue_threshold(u32 count); void __libeth_xdp_set_features(struct net_device *dev, - const struct xdp_metadata_ops *xmo); + const struct xdp_metadata_ops *xmo, + u32 zc_segs, + const struct xsk_tx_metadata_ops *tmo); void libeth_xdp_set_redirect(struct net_device *dev, bool enable); /** @@ -1773,9 +1814,13 @@ void libeth_xdp_set_redirect(struct net_device *dev, bool enable); COUNT_ARGS(__VA_ARGS__))(dev, ##__VA_ARGS__) #define __libeth_xdp_feat0(dev) \ - __libeth_xdp_set_features(dev, NULL) + __libeth_xdp_set_features(dev, NULL, 0, NULL) #define __libeth_xdp_feat1(dev, xmo) \ - __libeth_xdp_set_features(dev, xmo) + __libeth_xdp_set_features(dev, xmo, 0, NULL) +#define __libeth_xdp_feat2(dev, xmo, zc_segs) \ + __libeth_xdp_set_features(dev, xmo, zc_segs, NULL) +#define __libeth_xdp_feat3(dev, xmo, zc_segs, tmo) \ + __libeth_xdp_set_features(dev, xmo, zc_segs, tmo) /** * libeth_xdp_set_features_noredir - enable all libeth_xdp features w/o redir @@ -1796,4 +1841,6 @@ void libeth_xdp_set_redirect(struct net_device *dev, bool enable); libeth_xdp_set_redirect(ud, false); \ } while (0) +#define libeth_xsktmo ((const void *)GOLDEN_RATIO_PRIME) + #endif /* __LIBETH_XDP_H */ diff --git a/include/net/libeth/xsk.h b/include/net/libeth/xsk.h index af69b46fa7e4..16ca195981fe 100644 --- a/include/net/libeth/xsk.h +++ b/include/net/libeth/xsk.h @@ -7,6 +7,11 @@ #include #include +/* ``XDP_TXMD_FLAGS_VALID`` is defined only under ``CONFIG_XDP_SOCKETS`` */ +#ifdef XDP_TXMD_FLAGS_VALID +static_assert(XDP_TXMD_FLAGS_VALID <= LIBETH_XDP_TX_XSKMD); +#endif + /* ``XDP_TX`` bulking */ /** @@ -145,4 +150,165 @@ libeth_xsk_tx_fill_buf(struct libeth_xdp_tx_frame frm, u32 i, __libeth_xdp_tx_flush_bulk(bq, (flags) | LIBETH_XDP_TX_XSK, prep, \ libeth_xsk_tx_fill_buf, xmit) +/* XSk TMO */ + +/** + * libeth_xsktmo_req_csum - XSk Tx metadata op to request checksum offload + * @csum_start: unused + * @csum_offset: unused + * @priv: &libeth_xdp_tx_desc from the filling helper + * + * Generic implementation of ::tmo_request_checksum. Works only when HW doesn't + * require filling checksum offsets and other parameters beside the checksum + * request bit. + * Consider using within @libeth_xsktmo unless the driver requires HW-specific + * callbacks. + */ +static inline void libeth_xsktmo_req_csum(u16 csum_start, u16 csum_offset, + void *priv) +{ + ((struct libeth_xdp_tx_desc *)priv)->flags |= LIBETH_XDP_TX_CSUM; +} + +/* Only to inline the callbacks below, use @libeth_xsktmo in drivers instead */ +static const struct xsk_tx_metadata_ops __libeth_xsktmo = { + .tmo_request_checksum = libeth_xsktmo_req_csum, +}; + +/** + * __libeth_xsk_xmit_fill_buf_md - internal helper to prepare XSk xmit w/meta + * @xdesc: &xdp_desc from the XSk buffer pool + * @sq: XDPSQ abstraction for the queue + * @priv: XSk Tx metadata ops + * + * Same as __libeth_xsk_xmit_fill_buf(), but requests metadata pointer and + * fills additional fields in &libeth_xdp_tx_desc to ask for metadata offload. + * + * Return: XDP Tx descriptor with the DMA, metadata request bits, and other + * info to pass to the driver callback. 
+ */
+static __always_inline struct libeth_xdp_tx_desc
+__libeth_xsk_xmit_fill_buf_md(const struct xdp_desc *xdesc,
+			      const struct libeth_xdpsq *sq,
+			      u64 priv)
+{
+	const struct xsk_tx_metadata_ops *tmo = libeth_xdp_priv_to_ptr(priv);
+	struct libeth_xdp_tx_desc desc;
+	struct xdp_desc_ctx ctx;
+
+	ctx = xsk_buff_raw_get_ctx(sq->pool, xdesc->addr);
+	desc = (typeof(desc)){
+		.addr = ctx.dma,
+		.len = xdesc->len,
+	};
+
+	BUILD_BUG_ON(!__builtin_constant_p(tmo == libeth_xsktmo));
+	tmo = tmo == libeth_xsktmo ? &__libeth_xsktmo : tmo;
+
+	xsk_tx_metadata_request(ctx.meta, tmo, &desc);
+
+	return desc;
+}
+
+/* XSk xmit implementation */
+
+/**
+ * __libeth_xsk_xmit_fill_buf - internal helper to prepare XSk xmit w/o meta
+ * @xdesc: &xdp_desc from the XSk buffer pool
+ * @sq: XDPSQ abstraction for the queue
+ *
+ * Return: XDP Tx descriptor with the DMA and other info to pass to
+ * the driver callback.
+ */
+static inline struct libeth_xdp_tx_desc
+__libeth_xsk_xmit_fill_buf(const struct xdp_desc *xdesc,
+			   const struct libeth_xdpsq *sq)
+{
+	return (struct libeth_xdp_tx_desc){
+		.addr = xsk_buff_raw_get_dma(sq->pool, xdesc->addr),
+		.len = xdesc->len,
+	};
+}
+
+/**
+ * libeth_xsk_xmit_fill_buf - internal helper to prepare an XSk xmit
+ * @frm: Tx frame wrapping the &xdp_desc from the XSk buffer pool
+ * @i: index on the HW queue
+ * @sq: XDPSQ abstraction for the queue
+ * @priv: XSk Tx metadata ops
+ *
+ * Depending on the metadata ops presence (determined at compile time), calls
+ * the quickest helper to build a libeth XDP Tx descriptor.
+ *
+ * Return: XDP Tx descriptor with the synced DMA, metadata request bits,
+ * and other info to pass to the driver callback.
+ */
+static __always_inline struct libeth_xdp_tx_desc
+libeth_xsk_xmit_fill_buf(struct libeth_xdp_tx_frame frm, u32 i,
+			 const struct libeth_xdpsq *sq, u64 priv)
+{
+	struct libeth_xdp_tx_desc desc;
+
+	if (priv)
+		desc = __libeth_xsk_xmit_fill_buf_md(&frm.desc, sq, priv);
+	else
+		desc = __libeth_xsk_xmit_fill_buf(&frm.desc, sq);
+
+	desc.flags |= xsk_is_eop_desc(&frm.desc) ? LIBETH_XDP_TX_LAST : 0;
+
+	xsk_buff_raw_dma_sync_for_device(sq->pool, desc.addr, desc.len);
+
+	return desc;
+}
+
+/**
+ * libeth_xsk_xmit_do_bulk - send XSk xmit frames
+ * @pool: XSk buffer pool containing the frames to send
+ * @xdpsq: opaque pointer to driver's XDPSQ struct
+ * @budget: maximum number of frames that can be sent
+ * @tmo: optional XSk Tx metadata ops
+ * @prep: driver callback to build a &libeth_xdpsq
+ * @xmit: driver callback to put frames to a HW queue
+ * @finalize: driver callback to start a transmission
+ *
+ * Implements generic XSk xmit. Always turns on XSk Tx wakeup as it's assumed
+ * lazy cleaning is used and interrupts are disabled for the queue.
+ * HW descriptor filling is unrolled by ``LIBETH_XDP_TX_BATCH`` to optimize
+ * writes.
+ * Note that unlike other XDP Tx ops, the queue must be locked and cleaned
+ * prior to calling this function, so that the available @budget is already
+ * known. @prep must only build a &libeth_xdpsq and return ``U32_MAX``.
+ *
+ * Return: false if @budget was exhausted, true otherwise.
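+ *
+ * An illustrative caller sketch; the driver callback names here are
+ * hypothetical:
+ *
+ *	static bool drv_xsk_xmit(struct drv_xdpsq *xq, u32 budget)
+ *	{
+ *		return libeth_xsk_xmit_do_bulk(xq->pool, xq, budget,
+ *					       libeth_xsktmo, drv_xdpsq_prep,
+ *					       drv_xmit_fill_desc,
+ *					       drv_xdpsq_finalize);
+ *	}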
+ */ +static __always_inline bool +libeth_xsk_xmit_do_bulk(struct xsk_buff_pool *pool, void *xdpsq, u32 budget, + const struct xsk_tx_metadata_ops *tmo, + u32 (*prep)(void *xdpsq, struct libeth_xdpsq *sq), + void (*xmit)(struct libeth_xdp_tx_desc desc, u32 i, + const struct libeth_xdpsq *sq, u64 priv), + void (*finalize)(void *xdpsq, bool sent, bool flush)) +{ + const struct libeth_xdp_tx_frame *bulk; + bool wake; + u32 n; + + wake = xsk_uses_need_wakeup(pool); + if (wake) + xsk_clear_tx_need_wakeup(pool); + + n = xsk_tx_peek_release_desc_batch(pool, budget); + bulk = container_of(&pool->tx_descs[0], typeof(*bulk), desc); + + libeth_xdp_tx_xmit_bulk(bulk, xdpsq, n, true, + libeth_xdp_ptr_to_priv(tmo), prep, + libeth_xsk_xmit_fill_buf, xmit); + finalize(xdpsq, n, true); + + if (wake) + xsk_set_tx_need_wakeup(pool); + + return n < budget; +} + #endif /* __LIBETH_XSK_H */ diff --git a/drivers/net/ethernet/intel/libeth/xdp.c b/drivers/net/ethernet/intel/libeth/xdp.c index 0aed2db00b28..df774994909a 100644 --- a/drivers/net/ethernet/intel/libeth/xdp.c +++ b/drivers/net/ethernet/intel/libeth/xdp.c @@ -374,21 +374,31 @@ EXPORT_SYMBOL_GPL(libeth_xdp_queue_threshold); * __libeth_xdp_set_features - set XDP features for netdev * @dev: &net_device to configure * @xmo: XDP metadata ops (Rx hints) + * @zc_segs: maximum number of S/G frags the HW can transmit + * @tmo: XSk Tx metadata ops (Tx hints) * * Set all the features libeth_xdp supports. Only the first argument is - * necessary. + * necessary; without the third one (zero), XSk support won't be advertised. * Use the non-underscored versions in drivers instead. */ void __libeth_xdp_set_features(struct net_device *dev, - const struct xdp_metadata_ops *xmo) + const struct xdp_metadata_ops *xmo, + u32 zc_segs, + const struct xsk_tx_metadata_ops *tmo) { xdp_set_features_flag(dev, NETDEV_XDP_ACT_BASIC | NETDEV_XDP_ACT_REDIRECT | NETDEV_XDP_ACT_NDO_XMIT | + (zc_segs ? NETDEV_XDP_ACT_XSK_ZEROCOPY : 0) | NETDEV_XDP_ACT_RX_SG | NETDEV_XDP_ACT_NDO_XMIT_SG); dev->xdp_metadata_ops = xmo; + + tmo = tmo == libeth_xsktmo ? &libeth_xsktmo_slow : tmo; + + dev->xdp_zc_max_segs = zc_segs ? : 1; + dev->xsk_tx_metadata_ops = zc_segs ? 
tmo : NULL;
}
EXPORT_SYMBOL_GPL(__libeth_xdp_set_features);
diff --git a/drivers/net/ethernet/intel/libeth/xsk.c b/drivers/net/ethernet/intel/libeth/xsk.c
index 7cc47d515d45..c17424c52dd1 100644
--- a/drivers/net/ethernet/intel/libeth/xsk.c
+++ b/drivers/net/ethernet/intel/libeth/xsk.c
@@ -16,6 +16,12 @@ void __cold libeth_xsk_tx_return_bulk(const struct libeth_xdp_tx_frame *bq,
 		libeth_xsk_buff_free_slow(bq[i].xsk);
 }
 
+/* XSk TMO */
+
+const struct xsk_tx_metadata_ops libeth_xsktmo_slow = {
+	.tmo_request_checksum = libeth_xsktmo_req_csum,
+};
+
 /* Rx polling path */
 
 /**
From patchwork Tue Apr 15 17:28:23 2025
X-Patchwork-Submitter: Alexander Lobakin
X-Patchwork-Id: 14052500
X-Patchwork-Delegate: kuba@kernel.org
X-IronPort-AV: E=Sophos;i="6.15,213,1739865600"; d="scan'208";a="130729879" Received: from newjersey.igk.intel.com ([10.102.20.203]) by fmviesa010.fm.intel.com with ESMTP; 15 Apr 2025 10:29:48 -0700 From: Alexander Lobakin To: intel-wired-lan@lists.osuosl.org Cc: Alexander Lobakin , Michal Kubiak , Maciej Fijalkowski , Tony Nguyen , Przemek Kitszel , Andrew Lunn , "David S. Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni , Alexei Starovoitov , Daniel Borkmann , Jesper Dangaard Brouer , John Fastabend , Simon Horman , bpf@vger.kernel.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org Subject: [PATCH iwl-next 14/16] libeth: xsk: add XSk Rx processing support Date: Tue, 15 Apr 2025 19:28:23 +0200 Message-ID: <20250415172825.3731091-15-aleksander.lobakin@intel.com> X-Mailer: git-send-email 2.49.0 In-Reply-To: <20250415172825.3731091-1-aleksander.lobakin@intel.com> References: <20250415172825.3731091-1-aleksander.lobakin@intel.com> Precedence: bulk X-Mailing-List: bpf@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Patchwork-Delegate: kuba@kernel.org Add XSk counterparts for preparing XSk &libeth_xdp_buff (adding head and frags), running the program, and handling the verdict, inc. XDP_PASS. Shortcuts in comparison with regular Rx: frags and all verdicts except XDP_REDIRECT are under unlikely() and out of line; no checks for XDP program presence as it's always true for XSk. Suggested-by: Maciej Fijalkowski # optimizations Signed-off-by: Alexander Lobakin --- drivers/net/ethernet/intel/libeth/priv.h | 3 + include/net/libeth/xdp.h | 17 +- include/net/libeth/xsk.h | 273 +++++++++++++++++++++++ drivers/net/ethernet/intel/libeth/xdp.c | 6 +- drivers/net/ethernet/intel/libeth/xsk.c | 107 +++++++++ 5 files changed, 398 insertions(+), 8 deletions(-) diff --git a/drivers/net/ethernet/intel/libeth/priv.h b/drivers/net/ethernet/intel/libeth/priv.h index 03e74382b2cb..9b811d31015c 100644 --- a/drivers/net/ethernet/intel/libeth/priv.h +++ b/drivers/net/ethernet/intel/libeth/priv.h @@ -8,6 +8,7 @@ /* XDP */ +enum xdp_action; struct libeth_xdp_buff; struct libeth_xdp_tx_frame; struct skb_shared_info; @@ -17,6 +18,8 @@ extern const struct xsk_tx_metadata_ops libeth_xsktmo_slow; void libeth_xsk_tx_return_bulk(const struct libeth_xdp_tx_frame *bq, u32 count); +u32 libeth_xsk_prog_exception(struct libeth_xdp_buff *xdp, enum xdp_action act, + int ret); struct libeth_xdp_ops { void (*bulk)(const struct skb_shared_info *sinfo, diff --git a/include/net/libeth/xdp.h b/include/net/libeth/xdp.h index feb74dc36f1b..85f058482fc7 100644 --- a/include/net/libeth/xdp.h +++ b/include/net/libeth/xdp.h @@ -1115,18 +1115,19 @@ __libeth_xdp_xmit_do_bulk(struct libeth_xdp_tx_bulk *bq, * Should be called on an onstack XDP Tx bulk before the NAPI polling loop. * Initializes all the needed fields to run libeth_xdp functions. If @num == 0, * assumes XDP is not enabled. + * Do not use for XSk, it has its own optimized helper. 
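+ *
+ * An illustrative call site (the priv fields are hypothetical):
+ *
+ *	struct libeth_xdp_tx_bulk bq;
+ *
+ *	libeth_xdp_tx_init_bulk(&bq, priv->xdp_prog, dev, priv->xdpsqs,
+ *				priv->num_xdpsqs);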
*/ #define libeth_xdp_tx_init_bulk(bq, prog, dev, xdpsqs, num) \ __libeth_xdp_tx_init_bulk(bq, prog, dev, xdpsqs, num, false, \ __UNIQUE_ID(bq_), __UNIQUE_ID(nqs_)) -#define __libeth_xdp_tx_init_bulk(bq, pr, d, xdpsqs, num, ub, un) do { \ +#define __libeth_xdp_tx_init_bulk(bq, pr, d, xdpsqs, num, xsk, ub, un) do { \ typeof(bq) ub = (bq); \ u32 un = (num); \ \ rcu_read_lock(); \ \ - if (un) { \ + if (un || (xsk)) { \ ub->prog = rcu_dereference(pr); \ ub->dev = (d); \ ub->xdpsq = (xdpsqs)[libeth_xdpsq_id(un)]; \ @@ -1152,6 +1153,7 @@ void __libeth_xdp_return_stash(struct libeth_xdp_buff_stash *stash); * * Should be called before the main NAPI polling loop. Loads the content of * the previously saved stash or initializes the buffer from scratch. + * Do not use for XSk. */ static inline void libeth_xdp_init_buff(struct libeth_xdp_buff *dst, @@ -1371,7 +1373,7 @@ __libeth_xdp_run_prog(struct libeth_xdp_buff *xdp, * @flush_bulk: driver callback for flushing a bulk * * Internal inline abstraction to run XDP program and additionally handle - * ``XDP_TX`` verdict. + * ``XDP_TX`` verdict. Used by both XDP and XSk, hence @run and @queue. * Do not use directly. * * Return: libeth_xdp prog verdict depending on the prog's verdict. @@ -1401,12 +1403,13 @@ __libeth_xdp_run_flush(struct libeth_xdp_buff *xdp, } /** - * libeth_xdp_run_prog - run XDP program and handle all verdicts + * libeth_xdp_run_prog - run XDP program (non-XSk path) and handle all verdicts * @xdp: XDP buffer to process * @bq: XDP Tx bulk to queue ``XDP_TX`` buffers * @fl: driver ``XDP_TX`` bulk flush callback * - * Run the attached XDP program and handle all possible verdicts. + * Run the attached XDP program and handle all possible verdicts. XSk has its + * own version. * Prefer using it via LIBETH_XDP_DEFINE_RUN{,_PASS,_PROG}(). * * Return: true if the buffer should be passed up the stack, false if the poll @@ -1428,7 +1431,7 @@ __libeth_xdp_run_flush(struct libeth_xdp_buff *xdp, * @run: driver wrapper to run XDP program * @populate: driver callback to populate an skb with the HW descriptor data * - * Inline abstraction that does the following: + * Inline abstraction that does the following (non-XSk path): * 1) adds frame size and frag number (if needed) to the onstack stats; * 2) fills the descriptor metadata to the onstack &libeth_xdp_buff * 3) runs XDP program if present; @@ -1511,7 +1514,7 @@ static inline void libeth_xdp_prep_desc(struct libeth_xdp_buff *xdp, run, populate) /** - * libeth_xdp_finalize_rx - finalize XDPSQ after a NAPI polling loop + * libeth_xdp_finalize_rx - finalize XDPSQ after a NAPI polling loop (non-XSk) * @bq: ``XDP_TX`` frame bulk * @flush: driver callback to flush the bulk * @finalize: driver callback to start sending the frames and run the timer diff --git a/include/net/libeth/xsk.h b/include/net/libeth/xsk.h index 16ca195981fe..f3f338e566fc 100644 --- a/include/net/libeth/xsk.h +++ b/include/net/libeth/xsk.h @@ -311,4 +311,277 @@ libeth_xsk_xmit_do_bulk(struct xsk_buff_pool *pool, void *xdpsq, u32 budget, return n < budget; } +/* Rx polling path */ + +/** + * libeth_xsk_tx_init_bulk - initialize XDP Tx bulk for an XSk Rx NAPI poll + * @bq: bulk to initialize + * @prog: RCU pointer to the XDP program (never %NULL) + * @dev: target &net_device + * @xdpsqs: array of driver XDPSQ structs + * @num: number of active XDPSQs, the above array length + * + * Should be called on an onstack XDP Tx bulk before the XSk NAPI polling loop. + * Initializes all the needed fields to run libeth_xdp functions. 
+ * Never checks if @prog is %NULL or @num == 0 as XDP must always be enabled + * when hitting this path. + */ +#define libeth_xsk_tx_init_bulk(bq, prog, dev, xdpsqs, num) \ + __libeth_xdp_tx_init_bulk(bq, prog, dev, xdpsqs, num, true, \ + __UNIQUE_ID(bq_), __UNIQUE_ID(nqs_)) + +struct libeth_xdp_buff *libeth_xsk_buff_add_frag(struct libeth_xdp_buff *head, + struct libeth_xdp_buff *xdp); + +/** + * libeth_xsk_process_buff - attach XSk Rx buffer to &libeth_xdp_buff + * @head: head XSk buffer to attach the XSk buffer to (or %NULL) + * @xdp: XSk buffer to process + * @len: received data length from the descriptor + * + * If @head == %NULL, treats the XSk buffer as head and initializes + * the required fields. Otherwise, attaches the buffer as a frag. + * Already performs DMA sync-for-CPU and frame start prefetch + * (for head buffers only). + * + * Return: head XSk buffer on success or if the descriptor must be skipped + * (empty), %NULL if there is no space for a new frag. + */ +static inline struct libeth_xdp_buff * +libeth_xsk_process_buff(struct libeth_xdp_buff *head, + struct libeth_xdp_buff *xdp, u32 len) +{ + if (unlikely(!len)) { + libeth_xsk_buff_free_slow(xdp); + return head; + } + + xsk_buff_set_size(&xdp->base, len); + xsk_buff_dma_sync_for_cpu(&xdp->base); + + if (head) + return libeth_xsk_buff_add_frag(head, xdp); + + prefetch(xdp->data); + + return xdp; +} + +void libeth_xsk_buff_stats_frags(struct libeth_rq_napi_stats *rs, + const struct libeth_xdp_buff *xdp); + +u32 __libeth_xsk_run_prog_slow(struct libeth_xdp_buff *xdp, + const struct libeth_xdp_tx_bulk *bq, + enum xdp_action act, int ret); + +/** + * __libeth_xsk_run_prog - run XDP program on XSk buffer + * @xdp: XSk buffer to run the prog on + * @bq: buffer bulk for ``XDP_TX`` queueing + * + * Internal inline abstraction to run XDP program on XSk Rx path. Handles + * only the most common ``XDP_REDIRECT`` inline, the rest is processed + * externally. + * Reports an XDP prog exception on errors. + * + * Return: libeth_xdp prog verdict depending on the prog's verdict. + */ +static __always_inline u32 +__libeth_xsk_run_prog(struct libeth_xdp_buff *xdp, + const struct libeth_xdp_tx_bulk *bq) +{ + enum xdp_action act; + int ret = 0; + + act = bpf_prog_run_xdp(bq->prog, &xdp->base); + if (unlikely(act != XDP_REDIRECT)) +rest: + return __libeth_xsk_run_prog_slow(xdp, bq, act, ret); + + ret = xdp_do_redirect(bq->dev, &xdp->base, bq->prog); + if (unlikely(ret)) + goto rest; + + return LIBETH_XDP_REDIRECT; +} + +/** + * libeth_xsk_run_prog - run XDP program on XSk path and handle all verdicts + * @xdp: XSk buffer to process + * @bq: XDP Tx bulk to queue ``XDP_TX`` buffers + * @fl: driver ``XDP_TX`` bulk flush callback + * + * Run the attached XDP program and handle all possible verdicts. + * Prefer using it via LIBETH_XSK_DEFINE_RUN{,_PASS,_PROG}(). + * + * Return: libeth_xdp prog verdict depending on the prog's verdict. 
+ */ +#define libeth_xsk_run_prog(xdp, bq, fl) \ + __libeth_xdp_run_flush(xdp, bq, __libeth_xsk_run_prog, \ + libeth_xsk_tx_queue_bulk, fl) + +/** + * __libeth_xsk_run_pass - helper to run XDP program and handle the result + * @xdp: XSk buffer to process + * @bq: XDP Tx bulk to queue ``XDP_TX`` frames + * @napi: NAPI to build an skb and pass it up the stack + * @rs: onstack libeth RQ stats + * @md: metadata that should be filled to the XSk buffer + * @prep: callback for filling the metadata + * @run: driver wrapper to run XDP program + * @populate: driver callback to populate an skb with the HW descriptor data + * + * Inline abstraction, XSk's counterpart of __libeth_xdp_run_pass(), see its + * doc for details. + * + * Return: false if the polling loop must be exited due to lack of free + * buffers, true otherwise. + */ +static __always_inline bool +__libeth_xsk_run_pass(struct libeth_xdp_buff *xdp, + struct libeth_xdp_tx_bulk *bq, struct napi_struct *napi, + struct libeth_rq_napi_stats *rs, const void *md, + void (*prep)(struct libeth_xdp_buff *xdp, + const void *md), + u32 (*run)(struct libeth_xdp_buff *xdp, + struct libeth_xdp_tx_bulk *bq), + bool (*populate)(struct sk_buff *skb, + const struct libeth_xdp_buff *xdp, + struct libeth_rq_napi_stats *rs)) +{ + struct sk_buff *skb; + u32 act; + + rs->bytes += xdp->base.data_end - xdp->data; + rs->packets++; + + if (unlikely(xdp_buff_has_frags(&xdp->base))) + libeth_xsk_buff_stats_frags(rs, xdp); + + if (prep && (!__builtin_constant_p(!!md) || md)) + prep(xdp, md); + + act = run(xdp, bq); + if (likely(act == LIBETH_XDP_REDIRECT)) + return true; + + if (act != LIBETH_XDP_PASS) + return act != LIBETH_XDP_ABORTED; + + skb = xdp_build_skb_from_zc(&xdp->base); + if (unlikely(!skb)) { + libeth_xsk_buff_free_slow(xdp); + return true; + } + + if (unlikely(!populate(skb, xdp, rs))) { + napi_consume_skb(skb, true); + return true; + } + + napi_gro_receive(napi, skb); + + return true; +} + +/** + * libeth_xsk_run_pass - helper to run XDP program and handle the result + * @xdp: XSk buffer to process + * @bq: XDP Tx bulk to queue ``XDP_TX`` frames + * @napi: NAPI to build an skb and pass it up the stack + * @rs: onstack libeth RQ stats + * @desc: pointer to the HW descriptor for that frame + * @run: driver wrapper to run XDP program + * @populate: driver callback to populate an skb with the HW descriptor data + * + * Wrapper around the underscored version when "fill the descriptor metadata" + * means just writing the pointer to the HW descriptor as @xdp->desc. + */ +#define libeth_xsk_run_pass(xdp, bq, napi, rs, desc, run, populate) \ + __libeth_xsk_run_pass(xdp, bq, napi, rs, desc, libeth_xdp_prep_desc, \ + run, populate) + +/** + * libeth_xsk_finalize_rx - finalize XDPSQ after an XSk NAPI polling loop + * @bq: ``XDP_TX`` frame bulk + * @flush: driver callback to flush the bulk + * @finalize: driver callback to start sending the frames and run the timer + * + * Flush the bulk if there are frames left to send, kick the queue and flush + * the XDP maps. + */ +#define libeth_xsk_finalize_rx(bq, flush, finalize) \ + __libeth_xdp_finalize_rx(bq, LIBETH_XDP_TX_XSK, flush, finalize) + +/* + * Helpers to reduce boilerplate code in drivers. + * + * Typical driver XSk Rx flow would be (excl. 
bulk and buff init, frag attach): + * + * LIBETH_XDP_DEFINE_START(); + * LIBETH_XSK_DEFINE_FLUSH_TX(static driver_xsk_flush_tx, driver_xsk_tx_prep, + * driver_xdp_xmit); + * LIBETH_XSK_DEFINE_RUN(static driver_xsk_run, driver_xsk_run_prog, + * driver_xsk_flush_tx, driver_populate_skb); + * LIBETH_XSK_DEFINE_FINALIZE(static driver_xsk_finalize_rx, + * driver_xsk_flush_tx, driver_xdp_finalize_sq); + * LIBETH_XDP_DEFINE_END(); + * + * This will build a set of 4 static functions. The compiler is free to decide + * whether to inline them. + * Then, in the NAPI polling function: + * + * while (packets < budget) { + * // ... + * if (!driver_xsk_run(xdp, &bq, napi, &rs, desc)) + * break; + * } + * driver_xsk_finalize_rx(&bq); + */ + +/** + * LIBETH_XSK_DEFINE_FLUSH_TX - define a driver XSk ``XDP_TX`` flush function + * @name: name of the function to define + * @prep: driver callback to clean an XDPSQ + * @xmit: driver callback to write a HW Tx descriptor + */ +#define LIBETH_XSK_DEFINE_FLUSH_TX(name, prep, xmit) \ + __LIBETH_XDP_DEFINE_FLUSH_TX(name, prep, xmit, xsk) + +/** + * LIBETH_XSK_DEFINE_RUN_PROG - define a driver XDP program run function + * @name: name of the function to define + * @flush: driver callback to flush an XSk ``XDP_TX`` bulk + */ +#define LIBETH_XSK_DEFINE_RUN_PROG(name, flush) \ + u32 __LIBETH_XDP_DEFINE_RUN_PROG(name, flush, xsk) + +/** + * LIBETH_XSK_DEFINE_RUN_PASS - define a driver buffer process + pass function + * @name: name of the function to define + * @run: driver callback to run XDP program (above) + * @populate: driver callback to fill an skb with HW descriptor info + */ +#define LIBETH_XSK_DEFINE_RUN_PASS(name, run, populate) \ + bool __LIBETH_XDP_DEFINE_RUN_PASS(name, run, populate, xsk) + +/** + * LIBETH_XSK_DEFINE_RUN - define a driver buffer process, run + pass function + * @name: name of the function to define + * @run: name of the XDP prog run function to define + * @flush: driver callback to flush an XSk ``XDP_TX`` bulk + * @populate: driver callback to fill an skb with HW descriptor info + */ +#define LIBETH_XSK_DEFINE_RUN(name, run, flush, populate) \ + __LIBETH_XDP_DEFINE_RUN(name, run, flush, populate, XSK) + +/** + * LIBETH_XSK_DEFINE_FINALIZE - define a driver XSk NAPI poll finalize function + * @name: name of the function to define + * @flush: driver callback to flush an XSk ``XDP_TX`` bulk + * @finalize: driver callback to finalize an XDPSQ and run the timer + */ +#define LIBETH_XSK_DEFINE_FINALIZE(name, flush, finalize) \ + __LIBETH_XDP_DEFINE_FINALIZE(name, flush, finalize, xsk) + #endif /* __LIBETH_XSK_H */ diff --git a/drivers/net/ethernet/intel/libeth/xdp.c b/drivers/net/ethernet/intel/libeth/xdp.c index df774994909a..b84b9041f02e 100644 --- a/drivers/net/ethernet/intel/libeth/xdp.c +++ b/drivers/net/ethernet/intel/libeth/xdp.c @@ -284,7 +284,8 @@ EXPORT_SYMBOL_GPL(libeth_xdp_buff_add_frag); * @act: original XDP prog verdict * @ret: error code if redirect failed * - * External helper used by __libeth_xdp_run_prog(), do not call directly. + * External helper used by __libeth_xdp_run_prog() and + * __libeth_xsk_run_prog_slow(), do not call directly. * Reports invalid @act, XDP exception trace event and frees the buffer. * * Return: libeth_xdp XDP prog verdict. 
@@ -298,6 +299,9 @@ u32 __cold libeth_xdp_prog_exception(const struct libeth_xdp_tx_bulk *bq, libeth_trace_xdp_exception(bq->dev, bq->prog, act); + if (xdp->base.rxq->mem.type == MEM_TYPE_XSK_BUFF_POOL) + return libeth_xsk_prog_exception(xdp, act, ret); + libeth_xdp_return_buff_slow(xdp); return LIBETH_XDP_DROP; diff --git a/drivers/net/ethernet/intel/libeth/xsk.c b/drivers/net/ethernet/intel/libeth/xsk.c index c17424c52dd1..ecb038f20df5 100644 --- a/drivers/net/ethernet/intel/libeth/xsk.c +++ b/drivers/net/ethernet/intel/libeth/xsk.c @@ -36,3 +36,110 @@ void libeth_xsk_buff_free_slow(struct libeth_xdp_buff *xdp) xsk_buff_free(&xdp->base); } EXPORT_SYMBOL_GPL(libeth_xsk_buff_free_slow); + +/** + * libeth_xsk_buff_add_frag - add frag to XSk Rx buffer + * @head: head buffer + * @xdp: frag buffer + * + * External helper used by libeth_xsk_process_buff(), do not call directly. + * Frees both main and frag buffers on error. + * + * Return: main buffer with attached frag on success, %NULL on error (no space + * for a new frag). + */ +struct libeth_xdp_buff *libeth_xsk_buff_add_frag(struct libeth_xdp_buff *head, + struct libeth_xdp_buff *xdp) +{ + if (!xsk_buff_add_frag(&head->base, &xdp->base)) + goto free; + + return head; + +free: + libeth_xsk_buff_free_slow(xdp); + libeth_xsk_buff_free_slow(head); + + return NULL; +} +EXPORT_SYMBOL_GPL(libeth_xsk_buff_add_frag); + +/** + * libeth_xsk_buff_stats_frags - update onstack RQ stats with XSk frags info + * @rs: onstack stats to update + * @xdp: buffer to account + * + * External helper used by __libeth_xsk_run_pass(), do not call directly. + * Adds buffer's frags count and total len to the onstack stats. + */ +void libeth_xsk_buff_stats_frags(struct libeth_rq_napi_stats *rs, + const struct libeth_xdp_buff *xdp) +{ + libeth_xdp_buff_stats_frags(rs, xdp); +} +EXPORT_SYMBOL_GPL(libeth_xsk_buff_stats_frags); + +/** + * __libeth_xsk_run_prog_slow - process the non-``XDP_REDIRECT`` verdicts + * @xdp: buffer to process + * @bq: Tx bulk for queueing on ``XDP_TX`` + * @act: verdict to process + * @ret: error code if ``XDP_REDIRECT`` failed + * + * External helper used by __libeth_xsk_run_prog(), do not call directly. + * ``XDP_REDIRECT`` is the most common and hottest verdict on XSk, thus + * it is processed inline. The rest goes here for out-of-line processing, + * together with redirect errors. + * + * Return: libeth_xdp XDP prog verdict. + */ +u32 __libeth_xsk_run_prog_slow(struct libeth_xdp_buff *xdp, + const struct libeth_xdp_tx_bulk *bq, + enum xdp_action act, int ret) +{ + switch (act) { + case XDP_DROP: + xsk_buff_free(&xdp->base); + + return LIBETH_XDP_DROP; + case XDP_TX: + return LIBETH_XDP_TX; + case XDP_PASS: + return LIBETH_XDP_PASS; + default: + break; + } + + return libeth_xdp_prog_exception(bq, xdp, act, ret); +} +EXPORT_SYMBOL_GPL(__libeth_xsk_run_prog_slow); + +/** + * libeth_xsk_prog_exception - handle XDP prog exceptions on XSk + * @xdp: buffer to process + * @act: verdict returned by the prog + * @ret: error code if ``XDP_REDIRECT`` failed + * + * Internal. Frees the buffer and, if the queue uses XSk wakeups, stop the + * current NAPI poll when there are no free buffers left. + * + * Return: libeth_xdp's XDP prog verdict. 
+ */
+u32 __cold libeth_xsk_prog_exception(struct libeth_xdp_buff *xdp,
+				     enum xdp_action act, int ret)
+{
+	const struct xdp_buff_xsk *xsk;
+	u32 __ret = LIBETH_XDP_DROP;
+
+	if (act != XDP_REDIRECT)
+		goto drop;
+
+	xsk = container_of(&xdp->base, typeof(*xsk), xdp);
+	if (xsk_uses_need_wakeup(xsk->pool) && ret == -ENOBUFS)
+		__ret = LIBETH_XDP_ABORTED;
+
+drop:
+	libeth_xsk_buff_free_slow(xdp);
+
+	return __ret;
+}

From patchwork Tue Apr 15 17:28:24 2025
X-Patchwork-Submitter: Alexander Lobakin
X-Patchwork-Id: 14052501
X-Patchwork-Delegate: kuba@kernel.org
Received: from newjersey.igk.intel.com ([10.102.20.203]) by fmviesa010.fm.intel.com
with ESMTP; 15 Apr 2025 10:29:53 -0700 From: Alexander Lobakin To: intel-wired-lan@lists.osuosl.org Cc: Alexander Lobakin , Michal Kubiak , Maciej Fijalkowski , Tony Nguyen , Przemek Kitszel , Andrew Lunn , "David S. Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni , Alexei Starovoitov , Daniel Borkmann , Jesper Dangaard Brouer , John Fastabend , Simon Horman , bpf@vger.kernel.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org Subject: [PATCH iwl-next 15/16] libeth: xsk: add XSkFQ refill and XSk wakeup helpers Date: Tue, 15 Apr 2025 19:28:24 +0200 Message-ID: <20250415172825.3731091-16-aleksander.lobakin@intel.com> X-Mailer: git-send-email 2.49.0 In-Reply-To: <20250415172825.3731091-1-aleksander.lobakin@intel.com> References: <20250415172825.3731091-1-aleksander.lobakin@intel.com> Precedence: bulk X-Mailing-List: bpf@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Patchwork-Delegate: kuba@kernel.org XSkFQ refill is pretty generic across the drivers minus FQ descriptor filling and can easily be unified with one inline callback. XSk wakeup is usually not, but here, instead of commonly used "SW interrupts", I picked firing an IPI. In most tests, it showed better performance; it also provides better control for userspace on which CPU will handle the xmit, as SW interrupts honor IRQ affinity no matter which core produces XSk xmit descs (while XDPSQs are associated 1:1 with cores having the same ID). Signed-off-by: Alexander Lobakin --- include/net/libeth/xsk.h | 98 +++++++++++++++++++ drivers/net/ethernet/intel/libeth/xsk.c | 124 ++++++++++++++++++++++++ 2 files changed, 222 insertions(+) diff --git a/include/net/libeth/xsk.h b/include/net/libeth/xsk.h index f3f338e566fc..213778a68476 100644 --- a/include/net/libeth/xsk.h +++ b/include/net/libeth/xsk.h @@ -584,4 +584,102 @@ __libeth_xsk_run_pass(struct libeth_xdp_buff *xdp, #define LIBETH_XSK_DEFINE_FINALIZE(name, flush, finalize) \ __LIBETH_XDP_DEFINE_FINALIZE(name, flush, finalize, xsk) +/* Refilling */ + +/** + * struct libeth_xskfq - structure representing an XSk buffer (fill) queue + * @fp: hotpath part of the structure + * @pool: &xsk_buff_pool for buffer management + * @fqes: array of XSk buffer pointers + * @descs: opaque pointer to the HW descriptor array + * @ntu: index of the next buffer to poll + * @count: number of descriptors/buffers the queue has + * @pending: current number of XSkFQEs to refill + * @thresh: threshold below which the queue is refilled + * @buf_len: HW-writeable length per each buffer + * @nid: ID of the closest NUMA node with memory + */ +struct libeth_xskfq { + struct_group_tagged(libeth_xskfq_fp, fp, + struct xsk_buff_pool *pool; + struct libeth_xdp_buff **fqes; + void *descs; + + u32 ntu; + u32 count; + ); + + /* Cold fields */ + u32 pending; + u32 thresh; + + u32 buf_len; + int nid; +}; + +int libeth_xskfq_create(struct libeth_xskfq *fq); +void libeth_xskfq_destroy(struct libeth_xskfq *fq); + +/** + * libeth_xsk_buff_xdp_get_dma - get DMA address of XSk &libeth_xdp_buff + * @xdp: buffer to get the DMA addr for + */ +#define libeth_xsk_buff_xdp_get_dma(xdp) \ + xsk_buff_xdp_get_dma(&(xdp)->base) + +/** + * libeth_xskfqe_alloc - allocate @n XSk Rx buffers + * @fq: hotpath part of the XSkFQ, usually onstack + * @n: number of buffers to allocate + * @fill: driver callback to write DMA addresses to HW descriptors + * + * Note that @fq->ntu gets updated, but ::pending must be recalculated + * by the caller. + * + * Return: number of buffers refilled. 
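+ *
+ * An illustrative caller sketch (the fill callback is hypothetical):
+ *
+ *	done = libeth_xskfqe_alloc(&fq->fp, fq->pending, drv_xsk_fill_fqe);
+ *	fq->pending -= done;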
+ */ +static __always_inline u32 +libeth_xskfqe_alloc(struct libeth_xskfq_fp *fq, u32 n, + void (*fill)(const struct libeth_xskfq_fp *fq, u32 i)) +{ + u32 this, ret, done = 0; + struct xdp_buff **xskb; + + this = fq->count - fq->ntu; + if (likely(this > n)) + this = n; + +again: + xskb = (typeof(xskb))&fq->fqes[fq->ntu]; + ret = xsk_buff_alloc_batch(fq->pool, xskb, this); + + for (u32 i = 0, ntu = fq->ntu; likely(i < ret); i++) + fill(fq, ntu + i); + + done += ret; + fq->ntu += ret; + + if (likely(fq->ntu < fq->count) || unlikely(ret < this)) + goto out; + + fq->ntu = 0; + + if (this < n) { + this = n - this; + goto again; + } + +out: + return done; +} + +/* .ndo_xsk_wakeup */ + +void libeth_xsk_init_wakeup(call_single_data_t *csd, struct napi_struct *napi); +void libeth_xsk_wakeup(call_single_data_t *csd, u32 qid); + +/* Pool setup */ + +int libeth_xsk_setup_pool(struct net_device *dev, u32 qid, bool enable); + #endif /* __LIBETH_XSK_H */ diff --git a/drivers/net/ethernet/intel/libeth/xsk.c b/drivers/net/ethernet/intel/libeth/xsk.c index ecb038f20df5..9a510a509dcd 100644 --- a/drivers/net/ethernet/intel/libeth/xsk.c +++ b/drivers/net/ethernet/intel/libeth/xsk.c @@ -143,3 +143,127 @@ u32 __cold libeth_xsk_prog_exception(struct libeth_xdp_buff *xdp, return __ret; } + +/* Refill */ + +/** + * libeth_xskfq_create - create an XSkFQ + * @fq: fill queue to initialize + * + * Allocates the FQEs and initializes the fields used by libeth_xdp: number + * of buffers to refill, refill threshold and buffer len. + * + * Return: %0 on success, -errno otherwise. + */ +int libeth_xskfq_create(struct libeth_xskfq *fq) +{ + fq->fqes = kvcalloc_node(fq->count, sizeof(*fq->fqes), GFP_KERNEL, + fq->nid); + if (!fq->fqes) + return -ENOMEM; + + fq->pending = fq->count; + fq->thresh = libeth_xdp_queue_threshold(fq->count); + fq->buf_len = xsk_pool_get_rx_frame_size(fq->pool); + + return 0; +} +EXPORT_SYMBOL_GPL(libeth_xskfq_create); + +/** + * libeth_xskfq_destroy - destroy an XSkFQ + * @fq: fill queue to destroy + * + * Zeroes the used fields and frees the FQEs array. + */ +void libeth_xskfq_destroy(struct libeth_xskfq *fq) +{ + fq->buf_len = 0; + fq->thresh = 0; + fq->pending = 0; + + kvfree(fq->fqes); +} +EXPORT_SYMBOL_GPL(libeth_xskfq_destroy); + +/* .ndo_xsk_wakeup */ + +static void libeth_xsk_napi_sched(void *info) +{ + __napi_schedule_irqoff(info); +} + +/** + * libeth_xsk_init_wakeup - initialize libeth XSk wakeup structure + * @csd: struct to initialize + * @napi: NAPI corresponding to this queue + * + * libeth_xdp uses inter-processor interrupts to perform XSk wakeups. In order + * to do that, the corresponding CSDs must be initialized when creating the + * queues. + */ +void libeth_xsk_init_wakeup(call_single_data_t *csd, struct napi_struct *napi) +{ + INIT_CSD(csd, libeth_xsk_napi_sched, napi); +} +EXPORT_SYMBOL_GPL(libeth_xsk_init_wakeup); + +/** + * libeth_xsk_wakeup - perform an XSk wakeup + * @csd: CSD corresponding to the queue + * @qid: the stack queue index + * + * Try to mark the NAPI as missed first, so that it could be rescheduled. + * If it's not, schedule it on the corresponding CPU using IPIs (or directly + * if already running on it). 
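+ *
+ * An illustrative .ndo_xsk_wakeup sketch (the queue lookup helper is
+ * hypothetical):
+ *
+ *	static int drv_xsk_wakeup(struct net_device *dev, u32 qid, u32 flags)
+ *	{
+ *		struct drv_q_vector *qv = drv_get_q_vector(dev, qid);
+ *
+ *		libeth_xsk_wakeup(&qv->csd, qid);
+ *
+ *		return 0;
+ *	}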
+ */
+void libeth_xsk_wakeup(call_single_data_t *csd, u32 qid)
+{
+	struct napi_struct *napi = csd->info;
+
+	if (napi_if_scheduled_mark_missed(napi) ||
+	    unlikely(!napi_schedule_prep(napi)))
+		return;
+
+	if (unlikely(qid >= nr_cpu_ids))
+		qid %= nr_cpu_ids;
+
+	if (qid != raw_smp_processor_id() && cpu_online(qid))
+		smp_call_function_single_async(qid, csd);
+	else
+		__napi_schedule(napi);
+}
+EXPORT_SYMBOL_GPL(libeth_xsk_wakeup);
+
+/* Pool setup */
+
+#define LIBETH_XSK_DMA_ATTR \
+	(DMA_ATTR_WEAK_ORDERING | DMA_ATTR_SKIP_CPU_SYNC)
+
+/**
+ * libeth_xsk_setup_pool - setup or destroy an XSk pool for a queue
+ * @dev: target &net_device
+ * @qid: stack queue index to configure
+ * @enable: whether to enable or disable the pool
+ *
+ * Check that @qid is valid and then map or unmap the pool.
+ *
+ * Return: %0 on success, -errno otherwise.
+ */
+int libeth_xsk_setup_pool(struct net_device *dev, u32 qid, bool enable)
+{
+	struct xsk_buff_pool *pool;
+
+	pool = xsk_get_pool_from_qid(dev, qid);
+	if (!pool)
+		return -EINVAL;
+
+	if (enable)
+		return xsk_pool_dma_map(pool, dev->dev.parent,
+					LIBETH_XSK_DMA_ATTR);
+	else
+		xsk_pool_dma_unmap(pool, LIBETH_XSK_DMA_ATTR);
+
+	return 0;
+}
+EXPORT_SYMBOL_GPL(libeth_xsk_setup_pool);

From patchwork Tue Apr 15 17:28:25 2025
X-Patchwork-Submitter: Alexander Lobakin
X-Patchwork-Id: 14052502
X-Patchwork-Delegate: kuba@kernel.org

From patchwork Tue Apr 15 17:28:25 2025
X-Patchwork-Submitter: Alexander Lobakin
X-Patchwork-Id: 14052502
X-Patchwork-Delegate: kuba@kernel.org
From: Alexander Lobakin
To: intel-wired-lan@lists.osuosl.org
Cc: Alexander Lobakin, Michal Kubiak, Maciej Fijalkowski, Tony Nguyen,
 Przemek Kitszel, Andrew Lunn, "David S. Miller", Eric Dumazet,
 Jakub Kicinski, Paolo Abeni, Alexei Starovoitov, Daniel Borkmann,
 Jesper Dangaard Brouer, John Fastabend, Simon Horman,
 bpf@vger.kernel.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH iwl-next 16/16] libeth: xdp, xsk: access adjacent u32s as u64 where applicable
Date: Tue, 15 Apr 2025 19:28:25 +0200
Message-ID: <20250415172825.3731091-17-aleksander.lobakin@intel.com>
In-Reply-To: <20250415172825.3731091-1-aleksander.lobakin@intel.com>
References: <20250415172825.3731091-1-aleksander.lobakin@intel.com>

On 64-bit systems, writing/reading one u64 is faster than two u32s, even
when they are adjacent in a struct. Compilers don't guarantee that such
accesses will be combined; I observed both successful and unsuccessful
attempts with both GCC and Clang, and it's not easy to say what it
depends on.

There are a few places in libeth_xdp that win up to several percent from
the combined access (in both performance and object code size, especially
when unrolling). Add __LIBETH_WORD_ACCESS and use it there on LE. Drivers
are free to optimize HW-specific callbacks under the same definition.

Signed-off-by: Alexander Lobakin
---
 include/net/libeth/xdp.h | 29 ++++++++++++++++++++++++++---
 include/net/libeth/xsk.h | 10 +++++-----
 2 files changed, 31 insertions(+), 8 deletions(-)
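To make the claim concrete before the diff, here is a small standalone C11
demo of the trick, assuming, as the macro below does, that ::opts overlays
::len in the low 32 bits and ::flags in the high 32 bits on little endian.
The struct and values are illustrative only, not libeth's actual layout.

/* Builds with any C11 compiler on a little-endian host */
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

struct demo_desc {
	uint64_t addr;
	union {
		uint64_t opts;
		struct {
			uint32_t len;	/* low 32 bits on little endian */
			uint32_t flags;	/* high 32 bits on little endian */
		};
	};
};

int main(void)
{
	/* two 32-bit stores which the compiler may or may not merge */
	struct demo_desc two = { .len = 1514, .flags = 0x80000000u };
	/* one guaranteed 64-bit store of the same value */
	struct demo_desc one = {
		.opts = 1514ULL | ((uint64_t)0x80000000u << 32),
	};

	printf("len %" PRIu32 "/%" PRIu32 ", flags %#" PRIx32 "/%#" PRIx32 "\n",
	       two.len, one.len, two.flags, one.flags);
	return 0;
}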
diff --git a/include/net/libeth/xdp.h b/include/net/libeth/xdp.h
index 85f058482fc7..6d15386aff31 100644
--- a/include/net/libeth/xdp.h
+++ b/include/net/libeth/xdp.h
@@ -464,6 +464,21 @@ struct libeth_xdp_tx_desc {
 		((const void *)(uintptr_t)(priv));			      \
 })
 
+/*
+ * On 64-bit systems, assigning one u64 is faster than two u32s. When ::len
+ * occupies the lowest 32 bits (LE), the whole ::opts can be assigned directly.
+ */
+#ifdef __LITTLE_ENDIAN
+#define __LIBETH_WORD_ACCESS		1
+#endif
+#ifdef __LIBETH_WORD_ACCESS
+#define __libeth_xdp_tx_len(flen, ...)					      \
+	.opts = ((flen) | FIELD_PREP(GENMASK_ULL(63, 32), (__VA_ARGS__ + 0)))
+#else
+#define __libeth_xdp_tx_len(flen, ...)					      \
+	.len = (flen), .flags = (__VA_ARGS__ + 0)
+#endif
+
 /**
  * libeth_xdp_tx_xmit_bulk - main XDP Tx function
  * @bulk: array of frames to send
@@ -863,8 +878,7 @@ static inline u32 libeth_xdp_xmit_queue_head(struct libeth_xdp_tx_bulk *bq,
 
 	bq->bulk[bq->count++] = (typeof(*bq->bulk)){
 		.xdpf	= xdpf,
-		.len	= xdpf->len,
-		.flags	= LIBETH_XDP_TX_FIRST,
+		__libeth_xdp_tx_len(xdpf->len, LIBETH_XDP_TX_FIRST),
 	};
 
 	if (!xdp_frame_has_frags(xdpf))
@@ -895,7 +909,7 @@ static inline bool libeth_xdp_xmit_queue_frag(struct libeth_xdp_tx_bulk *bq,
 
 	bq->bulk[bq->count++] = (typeof(*bq->bulk)){
 		.dma	= dma,
-		.len	= skb_frag_size(frag),
+		__libeth_xdp_tx_len(skb_frag_size(frag)),
 	};
 
 	return true;
@@ -1253,6 +1267,7 @@ bool libeth_xdp_buff_add_frag(struct libeth_xdp_buff *xdp,
  * Internal, use libeth_xdp_process_buff() instead. Initializes XDP buffer
  * head with the Rx buffer data: data pointer, length, headroom, and
  * truesize/tailroom. Zeroes the flags.
+ * Uses a faster single u64 write instead of per-field accesses.
  */
 static inline void libeth_xdp_prepare_buff(struct libeth_xdp_buff *xdp,
 					   const struct libeth_fqe *fqe,
@@ -1260,7 +1275,15 @@ static inline void libeth_xdp_prepare_buff(struct libeth_xdp_buff *xdp,
 {
 	const struct page *page = __netmem_to_page(fqe->netmem);
 
+#ifdef __LIBETH_WORD_ACCESS
+	static_assert(offsetofend(typeof(xdp->base), flags) -
+		      offsetof(typeof(xdp->base), frame_sz) ==
+		      sizeof(u64));
+
+	*(u64 *)&xdp->base.frame_sz = fqe->truesize;
+#else
 	xdp_init_buff(&xdp->base, fqe->truesize, xdp->base.rxq);
+#endif
 	xdp_prepare_buff(&xdp->base, page_address(page) + fqe->offset,
 			 page->pp->p.offset, len, true);
 }
diff --git a/include/net/libeth/xsk.h b/include/net/libeth/xsk.h
index 213778a68476..481a7b28e6f2 100644
--- a/include/net/libeth/xsk.h
+++ b/include/net/libeth/xsk.h
@@ -26,8 +26,8 @@ static inline bool libeth_xsk_tx_queue_head(struct libeth_xdp_tx_bulk *bq,
 {
 	bq->bulk[bq->count++] = (typeof(*bq->bulk)){
 		.xsk	= xdp,
-		.len	= xdp->base.data_end - xdp->data,
-		.flags	= LIBETH_XDP_TX_FIRST,
+		__libeth_xdp_tx_len(xdp->base.data_end - xdp->data,
+				    LIBETH_XDP_TX_FIRST),
 	};
 
 	if (likely(!xdp_buff_has_frags(&xdp->base)))
@@ -48,7 +48,7 @@ static inline void libeth_xsk_tx_queue_frag(struct libeth_xdp_tx_bulk *bq,
 {
 	bq->bulk[bq->count++] = (typeof(*bq->bulk)){
 		.xsk	= frag,
-		.len	= frag->base.data_end - frag->data,
+		__libeth_xdp_tx_len(frag->base.data_end - frag->data),
 	};
 }
 
@@ -199,7 +199,7 @@ __libeth_xsk_xmit_fill_buf_md(const struct xdp_desc *xdesc,
 	ctx = xsk_buff_raw_get_ctx(sq->pool, xdesc->addr);
 	desc = (typeof(desc)){
 		.addr	= ctx.dma,
-		.len	= xdesc->len,
+		__libeth_xdp_tx_len(xdesc->len),
 	};
 
 	BUILD_BUG_ON(!__builtin_constant_p(tmo == libeth_xsktmo));
@@ -226,7 +226,7 @@ __libeth_xsk_xmit_fill_buf(const struct xdp_desc *xdesc,
 {
 	return (struct libeth_xdp_tx_desc){
 		.addr	= xsk_buff_raw_get_dma(sq->pool, xdesc->addr),
-		.len	= xdesc->len,
+		__libeth_xdp_tx_len(xdesc->len),
 	};
 }
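As for the commit message's note that drivers may optimize HW-specific
callbacks under the same definition, a rough, hypothetical sketch follows.
Only struct libeth_xdp_tx_desc comes from libeth; my_tx_desc, its field
layout and the flags-to-command mapping are invented for illustration.

/* Hypothetical HW Tx descriptor with ::len and ::cmd adjacent, LE layout */
struct my_tx_desc {
	__le64 addr;
	union {
		__le64 qw1;
		struct {
			__le32 len;
			__le32 cmd;
		};
	};
};

static void my_xmit_pkt(const struct libeth_xdp_tx_desc desc,
			struct my_tx_desc *txd)
{
	txd->addr = cpu_to_le64(desc.addr);

#ifdef __LIBETH_WORD_ACCESS
	/* ::opts already carries ::len in the low and ::flags in the high
	 * 32 bits, so both descriptor words are written with one store
	 */
	txd->qw1 = cpu_to_le64(desc.opts);
#else
	txd->len = cpu_to_le32(desc.len);
	txd->cmd = cpu_to_le32(desc.flags);
#endif
}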