From patchwork Wed Jan 15 15:18:54 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Lobakin X-Patchwork-Id: 13940538 X-Patchwork-Delegate: kuba@kernel.org Received: from mgamail.intel.com (mgamail.intel.com [192.198.163.15]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 6AE2F14AD3D; Wed, 15 Jan 2025 15:19:45 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=192.198.163.15 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1736954387; cv=none; b=dpzSvLqs+zoV/cF+BBXgOvvli01Z1Dd7bK+UUXYqBHDNQtJIEREpaqxwwhQNepdJe69Nf8lGNhCW4TCnV/NM5Xku0s5fChAXNQ6g3FrhVA6n2QCrUPz8hGnCxiFGQtnIYJYLm0D75lcNMGJTPNsvV6SoGLaCZEbYr4p5MXKTsg4= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1736954387; c=relaxed/simple; bh=ELcoXK68aSXbbHgV3tD6TZPzIw7K0wI0s+JGWugqMRY=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=UBDHfgLVBra3bEuGpiU7AwMZ9MBi3AgvQ3OFTjbsPkzks5+n4U8nCZF9SJlE2ufYFYCfM4hs19iWQ0UuVEXMsEANWTHB5mry/gVtF7fkf5VGEpG1ljp09CmPSP/DWKLe6+0h+VTdhDmj9psceE3erqNFEvQtex0U1xHes9hZBs4= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com; spf=pass smtp.mailfrom=intel.com; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b=VCyoZP75; arc=none smtp.client-ip=192.198.163.15 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=intel.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b="VCyoZP75" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1736954386; x=1768490386; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=ELcoXK68aSXbbHgV3tD6TZPzIw7K0wI0s+JGWugqMRY=; b=VCyoZP754BgZ24YG1+BBOruatlLXupfJLbbyonARmvWO5YYRV6TcVZ4u +AyvlRy2AykSeIZcwZgfrmtTVhYQebQ0QcTTov91yjVShBy60GyHEP9Pk GyMukpyKS2B4F7Muj02hgjzpA4p/ZDopf4F39jzPIs0QavnSoXZUrLrS3 SEie6IcDwMgceWKRvEWL0ng5X87onE7fC1EAWNxI/IotRAlrs3RV9jtAD REGDHIia2nrjyCKzNW9mJSHavi26jNlrUaBTnzVCMSPq5dG03gXgQ4lGP Z0or4h7etFPZP5S2GdeuaYApOFQPyUgg5bq0WpfWaAL/Dq7Zk11plv+7V w==; X-CSE-ConnectionGUID: MMTCRVbwRoCJEfmP3/QvKA== X-CSE-MsgGUID: AY8DkZpFRTGCJczaN8d6JQ== X-IronPort-AV: E=McAfee;i="6700,10204,11316"; a="37451768" X-IronPort-AV: E=Sophos;i="6.13,206,1732608000"; d="scan'208";a="37451768" Received: from fmviesa007.fm.intel.com ([10.60.135.147]) by fmvoesa109.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 15 Jan 2025 07:19:45 -0800 X-CSE-ConnectionGUID: sOstA/wkQ0WFMH0dHFLRSA== X-CSE-MsgGUID: U0GMibriQXSRKbLKRN35Zw== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.13,206,1732608000"; d="scan'208";a="105116648" Received: from newjersey.igk.intel.com ([10.102.20.203]) by fmviesa007.fm.intel.com with ESMTP; 15 Jan 2025 07:19:41 -0800 From: Alexander Lobakin To: Andrew Lunn , "David S. 
Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni Cc: Alexander Lobakin , Lorenzo Bianconi , Daniel Xu , Alexei Starovoitov , Daniel Borkmann , Andrii Nakryiko , John Fastabend , =?utf-8?q?Toke_H=C3=B8iland-J?= =?utf-8?q?=C3=B8rgensen?= , Jesper Dangaard Brouer , Martin KaFai Lau , netdev@vger.kernel.org, bpf@vger.kernel.org, linux-kernel@vger.kernel.org Subject: [PATCH net-next v3 1/8] net: gro: decouple GRO from the NAPI layer Date: Wed, 15 Jan 2025 16:18:54 +0100 Message-ID: <20250115151901.2063909-2-aleksander.lobakin@intel.com> X-Mailer: git-send-email 2.48.0 In-Reply-To: <20250115151901.2063909-1-aleksander.lobakin@intel.com> References: <20250115151901.2063909-1-aleksander.lobakin@intel.com> Precedence: bulk X-Mailing-List: bpf@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Patchwork-Delegate: kuba@kernel.org In fact, these two are not tied closely to each other. The only requirements to GRO are to use it in the BH context and have some sane limits on the packet batches, e.g. NAPI has a limit of its budget (64/8/etc.). Move purely GRO fields into a new tagged group, &gro_node. Embed it into &napi_struct and adjust all the references. napi_id doesn't really belong to GRO, but: 1. struct gro_node has a 4-byte padding at the end anyway. If you leave napi_id outside, struct napi_struct takes additional 8 bytes (u32 napi_id + another 4-byte padding). 2. gro_receive_skb() uses it to mark skbs. We don't want to split it into two functions or add an `if`, as this would be less efficient, but we need it to be NAPI-independent. The current approach doesn't change anything for NAPI-backed GROs; for standalone ones (which are less important currently), the embedded napi_id will be just zero => no-op. Three Ethernet drivers use napi_gro_flush() not really meant to be exported, so move it to and add that include there. napi_gro_receive() is used in more than 100 drivers, keep it in . This does not make GRO ready to use outside of the NAPI context yet. 
Signed-off-by: Alexander Lobakin Tested-by: Daniel Xu --- include/linux/netdevice.h | 26 +++++--- include/net/busy_poll.h | 11 +++- include/net/gro.h | 35 +++++++---- drivers/net/ethernet/brocade/bna/bnad.c | 1 + drivers/net/ethernet/cortina/gemini.c | 1 + drivers/net/wwan/t7xx/t7xx_hif_dpmaif_rx.c | 1 + net/core/dev.c | 60 ++++++++----------- net/core/gro.c | 69 +++++++++++----------- 8 files changed, 112 insertions(+), 92 deletions(-) diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h index bced03fb349e..f04116ecf475 100644 --- a/include/linux/netdevice.h +++ b/include/linux/netdevice.h @@ -339,8 +339,8 @@ struct gro_list { }; /* - * size of gro hash buckets, must less than bit number of - * napi_struct::gro_bitmask + * size of gro hash buckets, must be <= the number of bits in + * gro_node::bitmask */ #define GRO_HASH_BUCKETS 8 @@ -369,7 +369,6 @@ struct napi_struct { unsigned long state; int weight; u32 defer_hard_irqs_count; - unsigned long gro_bitmask; int (*poll)(struct napi_struct *, int); #ifdef CONFIG_NETPOLL /* CPU actively polling if netpoll is configured */ @@ -378,11 +377,14 @@ struct napi_struct { /* CPU on which NAPI has been scheduled for processing */ int list_owner; struct net_device *dev; - struct gro_list gro_hash[GRO_HASH_BUCKETS]; + struct_group_tagged(gro_node, gro, + unsigned long bitmask; + struct gro_list hash[GRO_HASH_BUCKETS]; + struct list_head rx_list; + int rx_count; + u32 napi_id; + ); struct sk_buff *skb; - struct list_head rx_list; /* Pending GRO_NORMAL skbs */ - int rx_count; /* length of rx_list */ - unsigned int napi_id; struct hrtimer timer; struct task_struct *thread; unsigned long gro_flush_timeout; @@ -4013,8 +4015,14 @@ int netif_receive_skb(struct sk_buff *skb); int netif_receive_skb_core(struct sk_buff *skb); void netif_receive_skb_list_internal(struct list_head *head); void netif_receive_skb_list(struct list_head *head); -gro_result_t napi_gro_receive(struct napi_struct *napi, struct sk_buff *skb); -void napi_gro_flush(struct napi_struct *napi, bool flush_old); +gro_result_t gro_receive_skb(struct gro_node *gro, struct sk_buff *skb); + +static inline gro_result_t napi_gro_receive(struct napi_struct *napi, + struct sk_buff *skb) +{ + return gro_receive_skb(&napi->gro, skb); +} + struct sk_buff *napi_get_frags(struct napi_struct *napi); void napi_get_frags_check(struct napi_struct *napi); gro_result_t napi_gro_frags(struct napi_struct *napi); diff --git a/include/net/busy_poll.h b/include/net/busy_poll.h index c858270141bc..d31c8cb9e578 100644 --- a/include/net/busy_poll.h +++ b/include/net/busy_poll.h @@ -122,18 +122,23 @@ static inline void sk_busy_loop(struct sock *sk, int nonblock) } /* used in the NIC receive handler to mark the skb */ -static inline void skb_mark_napi_id(struct sk_buff *skb, - struct napi_struct *napi) +static inline void __skb_mark_napi_id(struct sk_buff *skb, u32 napi_id) { #ifdef CONFIG_NET_RX_BUSY_POLL /* If the skb was already marked with a valid NAPI ID, avoid overwriting * it. 
*/ if (skb->napi_id < MIN_NAPI_ID) - skb->napi_id = napi->napi_id; + skb->napi_id = napi_id; #endif } +static inline void skb_mark_napi_id(struct sk_buff *skb, + struct napi_struct *napi) +{ + __skb_mark_napi_id(skb, napi->napi_id); +} + /* used in the protocol handler to propagate the napi_id to the socket */ static inline void sk_mark_napi_id(struct sock *sk, const struct sk_buff *skb) { diff --git a/include/net/gro.h b/include/net/gro.h index b9b58c1f8d19..7aad366452d6 100644 --- a/include/net/gro.h +++ b/include/net/gro.h @@ -506,26 +506,41 @@ static inline int gro_receive_network_flush(const void *th, const void *th2, int skb_gro_receive(struct sk_buff *p, struct sk_buff *skb); int skb_gro_receive_list(struct sk_buff *p, struct sk_buff *skb); +void __gro_flush(struct gro_node *gro, bool flush_old); + +static inline void gro_flush(struct gro_node *gro, bool flush_old) +{ + if (!gro->bitmask) + return; + + __gro_flush(gro, flush_old); +} + +static inline void napi_gro_flush(struct napi_struct *napi, bool flush_old) +{ + gro_flush(&napi->gro, flush_old); +} /* Pass the currently batched GRO_NORMAL SKBs up to the stack. */ -static inline void gro_normal_list(struct napi_struct *napi) +static inline void gro_normal_list(struct gro_node *gro) { - if (!napi->rx_count) + if (!gro->rx_count) return; - netif_receive_skb_list_internal(&napi->rx_list); - INIT_LIST_HEAD(&napi->rx_list); - napi->rx_count = 0; + netif_receive_skb_list_internal(&gro->rx_list); + INIT_LIST_HEAD(&gro->rx_list); + gro->rx_count = 0; } /* Queue one GRO_NORMAL SKB up for list processing. If batch size exceeded, * pass the whole batch up to the stack. */ -static inline void gro_normal_one(struct napi_struct *napi, struct sk_buff *skb, int segs) +static inline void gro_normal_one(struct gro_node *gro, struct sk_buff *skb, + int segs) { - list_add_tail(&skb->list, &napi->rx_list); - napi->rx_count += segs; - if (napi->rx_count >= READ_ONCE(net_hotdata.gro_normal_batch)) - gro_normal_list(napi); + list_add_tail(&skb->list, &gro->rx_list); + gro->rx_count += segs; + if (gro->rx_count >= READ_ONCE(net_hotdata.gro_normal_batch)) + gro_normal_list(gro); } /* This function is the alternative of 'inet_iif' and 'inet_sdif' diff --git a/drivers/net/ethernet/brocade/bna/bnad.c b/drivers/net/ethernet/brocade/bna/bnad.c index ece6f3b48327..3b9107003b00 100644 --- a/drivers/net/ethernet/brocade/bna/bnad.c +++ b/drivers/net/ethernet/brocade/bna/bnad.c @@ -19,6 +19,7 @@ #include #include #include +#include #include "bnad.h" #include "bna.h" diff --git a/drivers/net/ethernet/cortina/gemini.c b/drivers/net/ethernet/cortina/gemini.c index 991e3839858b..1f8067bdd61a 100644 --- a/drivers/net/ethernet/cortina/gemini.c +++ b/drivers/net/ethernet/cortina/gemini.c @@ -40,6 +40,7 @@ #include #include #include +#include #include "gemini.h" diff --git a/drivers/net/wwan/t7xx/t7xx_hif_dpmaif_rx.c b/drivers/net/wwan/t7xx/t7xx_hif_dpmaif_rx.c index 7a9c09cd4fdc..6a7a26085fc7 100644 --- a/drivers/net/wwan/t7xx/t7xx_hif_dpmaif_rx.c +++ b/drivers/net/wwan/t7xx/t7xx_hif_dpmaif_rx.c @@ -41,6 +41,7 @@ #include #include #include +#include #include "t7xx_dpmaif.h" #include "t7xx_hif_dpmaif.h" diff --git a/net/core/dev.c b/net/core/dev.c index fda4e1039bf0..afa5e6e7eb3f 100644 --- a/net/core/dev.c +++ b/net/core/dev.c @@ -6288,7 +6288,7 @@ bool napi_complete_done(struct napi_struct *n, int work_done) return false; if (work_done) { - if (n->gro_bitmask) + if (n->gro.bitmask) timeout = napi_get_gro_flush_timeout(n); n->defer_hard_irqs_count = 
napi_get_defer_hard_irqs(n); } @@ -6298,15 +6298,14 @@ bool napi_complete_done(struct napi_struct *n, int work_done) if (timeout) ret = false; } - if (n->gro_bitmask) { - /* When the NAPI instance uses a timeout and keeps postponing - * it, we need to bound somehow the time packets are kept in - * the GRO layer - */ - napi_gro_flush(n, !!timeout); - } - gro_normal_list(n); + /* + * When the NAPI instance uses a timeout and keeps postponing + * it, we need to bound somehow the time packets are kept in + * the GRO layer. + */ + gro_flush(&n->gro, !!timeout); + gro_normal_list(&n->gro); if (unlikely(!list_empty(&n->poll_list))) { /* If n->poll_list is not empty, we need to mask irqs */ @@ -6370,19 +6369,15 @@ static void skb_defer_free_flush(struct softnet_data *sd) static void __busy_poll_stop(struct napi_struct *napi, bool skip_schedule) { if (!skip_schedule) { - gro_normal_list(napi); + gro_normal_list(&napi->gro); __napi_schedule(napi); return; } - if (napi->gro_bitmask) { - /* flush too old packets - * If HZ < 1000, flush all packets. - */ - napi_gro_flush(napi, HZ >= 1000); - } + /* Flush too old packets. If HZ < 1000, flush all packets */ + gro_flush(&napi->gro, HZ >= 1000); + gro_normal_list(&napi->gro); - gro_normal_list(napi); clear_bit(NAPI_STATE_SCHED, &napi->state); } @@ -6489,7 +6484,7 @@ static void __napi_busy_loop(unsigned int napi_id, } work = napi_poll(napi, budget); trace_napi_poll(napi, work, budget); - gro_normal_list(napi); + gro_normal_list(&napi->gro); count: if (work > 0) __NET_ADD_STATS(dev_net(napi->dev), @@ -6662,10 +6657,10 @@ static void init_gro_hash(struct napi_struct *napi) int i; for (i = 0; i < GRO_HASH_BUCKETS; i++) { - INIT_LIST_HEAD(&napi->gro_hash[i].list); - napi->gro_hash[i].count = 0; + INIT_LIST_HEAD(&napi->gro.hash[i].list); + napi->gro.hash[i].count = 0; } - napi->gro_bitmask = 0; + napi->gro.bitmask = 0; } int dev_set_threaded(struct net_device *dev, bool threaded) @@ -6811,8 +6806,8 @@ void netif_napi_add_weight(struct net_device *dev, struct napi_struct *napi, napi->timer.function = napi_watchdog; init_gro_hash(napi); napi->skb = NULL; - INIT_LIST_HEAD(&napi->rx_list); - napi->rx_count = 0; + INIT_LIST_HEAD(&napi->gro.rx_list); + napi->gro.rx_count = 0; napi->poll = poll; if (weight > NAPI_POLL_WEIGHT) netdev_err_once(dev, "%s() called with weight %d\n", __func__, @@ -6906,9 +6901,9 @@ static void flush_gro_hash(struct napi_struct *napi) for (i = 0; i < GRO_HASH_BUCKETS; i++) { struct sk_buff *skb, *n; - list_for_each_entry_safe(skb, n, &napi->gro_hash[i].list, list) + list_for_each_entry_safe(skb, n, &napi->gro.hash[i].list, list) kfree_skb(skb); - napi->gro_hash[i].count = 0; + napi->gro.hash[i].count = 0; } } @@ -6927,7 +6922,7 @@ void __netif_napi_del(struct napi_struct *napi) napi_free_frags(napi); flush_gro_hash(napi); - napi->gro_bitmask = 0; + napi->gro.bitmask = 0; if (napi->thread) { kthread_stop(napi->thread); @@ -6986,14 +6981,9 @@ static int __napi_poll(struct napi_struct *n, bool *repoll) return work; } - if (n->gro_bitmask) { - /* flush too old packets - * If HZ < 1000, flush all packets. - */ - napi_gro_flush(n, HZ >= 1000); - } - - gro_normal_list(n); + /* Flush too old packets. If HZ < 1000, flush all packets */ + gro_flush(&n->gro, HZ >= 1000); + gro_normal_list(&n->gro); /* Some drivers may have called napi_schedule * prior to exhausting their budget. 
@@ -11933,7 +11923,7 @@ static struct hlist_head * __net_init netdev_create_hash(void) static int __net_init netdev_init(struct net *net) { BUILD_BUG_ON(GRO_HASH_BUCKETS > - 8 * sizeof_field(struct napi_struct, gro_bitmask)); + BITS_PER_BYTE * sizeof_field(struct gro_node, bitmask)); INIT_LIST_HEAD(&net->dev_base_head); diff --git a/net/core/gro.c b/net/core/gro.c index d1f44084e978..77ec10d9cd43 100644 --- a/net/core/gro.c +++ b/net/core/gro.c @@ -253,8 +253,7 @@ int skb_gro_receive_list(struct sk_buff *p, struct sk_buff *skb) return 0; } - -static void napi_gro_complete(struct napi_struct *napi, struct sk_buff *skb) +static void gro_complete(struct gro_node *gro, struct sk_buff *skb) { struct list_head *head = &net_hotdata.offload_base; struct packet_offload *ptype; @@ -287,43 +286,43 @@ static void napi_gro_complete(struct napi_struct *napi, struct sk_buff *skb) } out: - gro_normal_one(napi, skb, NAPI_GRO_CB(skb)->count); + gro_normal_one(gro, skb, NAPI_GRO_CB(skb)->count); } -static void __napi_gro_flush_chain(struct napi_struct *napi, u32 index, - bool flush_old) +static void __gro_flush_chain(struct gro_node *gro, u32 index, bool flush_old) { - struct list_head *head = &napi->gro_hash[index].list; + struct list_head *head = &gro->hash[index].list; struct sk_buff *skb, *p; list_for_each_entry_safe_reverse(skb, p, head, list) { if (flush_old && NAPI_GRO_CB(skb)->age == jiffies) return; skb_list_del_init(skb); - napi_gro_complete(napi, skb); - napi->gro_hash[index].count--; + gro_complete(gro, skb); + gro->hash[index].count--; } - if (!napi->gro_hash[index].count) - __clear_bit(index, &napi->gro_bitmask); + if (!gro->hash[index].count) + __clear_bit(index, &gro->bitmask); } -/* napi->gro_hash[].list contains packets ordered by age. +/* + * gro->hash[].list contains packets ordered by age. * youngest packets at the head of it. * Complete skbs in reverse order to reduce latencies. */ -void napi_gro_flush(struct napi_struct *napi, bool flush_old) +void __gro_flush(struct gro_node *gro, bool flush_old) { - unsigned long bitmask = napi->gro_bitmask; + unsigned long bitmask = gro->bitmask; unsigned int i, base = ~0U; while ((i = ffs(bitmask)) != 0) { bitmask >>= i; base += i; - __napi_gro_flush_chain(napi, base, flush_old); + __gro_flush_chain(gro, base, flush_old); } } -EXPORT_SYMBOL(napi_gro_flush); +EXPORT_SYMBOL(__gro_flush); static unsigned long gro_list_prepare_tc_ext(const struct sk_buff *skb, const struct sk_buff *p, @@ -442,7 +441,7 @@ static void gro_try_pull_from_frag0(struct sk_buff *skb) gro_pull_from_frag0(skb, grow); } -static void gro_flush_oldest(struct napi_struct *napi, struct list_head *head) +static void gro_flush_oldest(struct gro_node *gro, struct list_head *head) { struct sk_buff *oldest; @@ -458,14 +457,15 @@ static void gro_flush_oldest(struct napi_struct *napi, struct list_head *head) * SKB to the chain. 
*/ skb_list_del_init(oldest); - napi_gro_complete(napi, oldest); + gro_complete(gro, oldest); } -static enum gro_result dev_gro_receive(struct napi_struct *napi, struct sk_buff *skb) +static enum gro_result dev_gro_receive(struct gro_node *gro, + struct sk_buff *skb) { u32 bucket = skb_get_hash_raw(skb) & (GRO_HASH_BUCKETS - 1); - struct gro_list *gro_list = &napi->gro_hash[bucket]; struct list_head *head = &net_hotdata.offload_base; + struct gro_list *gro_list = &gro->hash[bucket]; struct packet_offload *ptype; __be16 type = skb->protocol; struct sk_buff *pp = NULL; @@ -529,7 +529,7 @@ static enum gro_result dev_gro_receive(struct napi_struct *napi, struct sk_buff if (pp) { skb_list_del_init(pp); - napi_gro_complete(napi, pp); + gro_complete(gro, pp); gro_list->count--; } @@ -540,7 +540,7 @@ static enum gro_result dev_gro_receive(struct napi_struct *napi, struct sk_buff goto normal; if (unlikely(gro_list->count >= MAX_GRO_SKBS)) - gro_flush_oldest(napi, &gro_list->list); + gro_flush_oldest(gro, &gro_list->list); else gro_list->count++; @@ -554,10 +554,10 @@ static enum gro_result dev_gro_receive(struct napi_struct *napi, struct sk_buff ret = GRO_HELD; ok: if (gro_list->count) { - if (!test_bit(bucket, &napi->gro_bitmask)) - __set_bit(bucket, &napi->gro_bitmask); - } else if (test_bit(bucket, &napi->gro_bitmask)) { - __clear_bit(bucket, &napi->gro_bitmask); + if (!test_bit(bucket, &gro->bitmask)) + __set_bit(bucket, &gro->bitmask); + } else if (test_bit(bucket, &gro->bitmask)) { + __clear_bit(bucket, &gro->bitmask); } return ret; @@ -596,13 +596,12 @@ struct packet_offload *gro_find_complete_by_type(__be16 type) } EXPORT_SYMBOL(gro_find_complete_by_type); -static gro_result_t napi_skb_finish(struct napi_struct *napi, - struct sk_buff *skb, - gro_result_t ret) +static gro_result_t gro_skb_finish(struct gro_node *gro, struct sk_buff *skb, + gro_result_t ret) { switch (ret) { case GRO_NORMAL: - gro_normal_one(napi, skb, 1); + gro_normal_one(gro, skb, 1); break; case GRO_MERGED_FREE: @@ -623,21 +622,21 @@ static gro_result_t napi_skb_finish(struct napi_struct *napi, return ret; } -gro_result_t napi_gro_receive(struct napi_struct *napi, struct sk_buff *skb) +gro_result_t gro_receive_skb(struct gro_node *gro, struct sk_buff *skb) { gro_result_t ret; - skb_mark_napi_id(skb, napi); + __skb_mark_napi_id(skb, gro->napi_id); trace_napi_gro_receive_entry(skb); skb_gro_reset_offset(skb, 0); - ret = napi_skb_finish(napi, skb, dev_gro_receive(napi, skb)); + ret = gro_skb_finish(gro, skb, dev_gro_receive(gro, skb)); trace_napi_gro_receive_exit(ret); return ret; } -EXPORT_SYMBOL(napi_gro_receive); +EXPORT_SYMBOL(gro_receive_skb); static void napi_reuse_skb(struct napi_struct *napi, struct sk_buff *skb) { @@ -693,7 +692,7 @@ static gro_result_t napi_frags_finish(struct napi_struct *napi, __skb_push(skb, ETH_HLEN); skb->protocol = eth_type_trans(skb, skb->dev); if (ret == GRO_NORMAL) - gro_normal_one(napi, skb, 1); + gro_normal_one(&napi->gro, skb, 1); break; case GRO_MERGED_FREE: @@ -762,7 +761,7 @@ gro_result_t napi_gro_frags(struct napi_struct *napi) trace_napi_gro_frags_entry(skb); - ret = napi_frags_finish(napi, skb, dev_gro_receive(napi, skb)); + ret = napi_frags_finish(napi, skb, dev_gro_receive(&napi->gro, skb)); trace_napi_gro_frags_exit(ret); return ret; From patchwork Wed Jan 15 15:18:55 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Lobakin X-Patchwork-Id: 13940539 X-Patchwork-Delegate: kuba@kernel.org Received: 
from mgamail.intel.com (mgamail.intel.com [192.198.163.15]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 1EE231448DC; Wed, 15 Jan 2025 15:19:49 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=192.198.163.15 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1736954390; cv=none; b=qW2HwlfZOgtVVpGh/AsCTu37gJ5jZYSQ77xHITLXIdIeviM203BVmSs2bEEJyCHKkDMj5THXMT0DkJy97SavpEIWr8Ki82iEN8HOqyC1X1NTJNLBb9KOSAGTtmTkKpg/F/HULekOsyPyz6MXqzpx4LHRYF4pY6fozct4VGGtncI= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1736954390; c=relaxed/simple; bh=hvY65ELOVizwVM0MFYKXWcYas5ed242LEd5vH8fd0BU=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=cHKfmkUqd/blz5WQfZ2wBc3viSw5rxgU9zr3PrLbIXZ7EKgjJHnT7364KH7MmKKscXwkVkYZRfMLTSPFxuQK1d+JQ31ulDTK2qhbAaIHOwifKtvZ7NOjdvPZeJjZHgwCzu4vyaeF2sWY/rr5ocm7GzSiybLfOJHXSx/jUvWUIqk= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com; spf=pass smtp.mailfrom=intel.com; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b=cT7cYTlu; arc=none smtp.client-ip=192.198.163.15 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=intel.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b="cT7cYTlu" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1736954389; x=1768490389; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=hvY65ELOVizwVM0MFYKXWcYas5ed242LEd5vH8fd0BU=; b=cT7cYTlu9eiTqzQJhxV7GkGJC3Q6cFdZ9IZ7Ttu8WYdhKGLWMrUhFtUi QwRHMhU9oIimyvy7EzF40P09SlO0OoZtKgw7F5B9D6zAgJlSO/Gv+IZLp VMifVLjNo4Oy4rw+TMsIbJ0F5Rw2DatPIUrq8btqjHtX+lx91TsPuDUXq c+8VP5I6zJx4faq9HmTQXJ1ivugMQgS+AndkK9xB9LMYuDkxhruwXhsKH lczIwv0qoLB0n78PkmcIF1pOsFkaZEewnfO2GnP8Qs+wVY8dPuK3AAuJ1 ycpsi75Xg+W3YZLivC33RCWOKiUGhjBLvtKgvSXsIQENTKJt4j4MyrJ+9 w==; X-CSE-ConnectionGUID: RSCpbbjDQg2/Pix8z9iDhA== X-CSE-MsgGUID: NDwFD8kWSmiZ1N68SZ9Kzg== X-IronPort-AV: E=McAfee;i="6700,10204,11316"; a="37451780" X-IronPort-AV: E=Sophos;i="6.13,206,1732608000"; d="scan'208";a="37451780" Received: from fmviesa007.fm.intel.com ([10.60.135.147]) by fmvoesa109.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 15 Jan 2025 07:19:49 -0800 X-CSE-ConnectionGUID: slY4WwgzSDWpibHXw/DvKw== X-CSE-MsgGUID: yjnBfjSjSKWdXlEYRN6luw== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.13,206,1732608000"; d="scan'208";a="105116653" Received: from newjersey.igk.intel.com ([10.102.20.203]) by fmviesa007.fm.intel.com with ESMTP; 15 Jan 2025 07:19:45 -0800 From: Alexander Lobakin To: Andrew Lunn , "David S. 
Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni Cc: Alexander Lobakin , Lorenzo Bianconi , Daniel Xu , Alexei Starovoitov , Daniel Borkmann , Andrii Nakryiko , John Fastabend , =?utf-8?q?Toke_H=C3=B8iland-J?= =?utf-8?q?=C3=B8rgensen?= , Jesper Dangaard Brouer , Martin KaFai Lau , netdev@vger.kernel.org, bpf@vger.kernel.org, linux-kernel@vger.kernel.org Subject: [PATCH net-next v3 2/8] net: gro: expose GRO init/cleanup to use outside of NAPI Date: Wed, 15 Jan 2025 16:18:55 +0100 Message-ID: <20250115151901.2063909-3-aleksander.lobakin@intel.com> X-Mailer: git-send-email 2.48.0 In-Reply-To: <20250115151901.2063909-1-aleksander.lobakin@intel.com> References: <20250115151901.2063909-1-aleksander.lobakin@intel.com> Precedence: bulk X-Mailing-List: bpf@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Patchwork-Delegate: kuba@kernel.org Make GRO init and cleanup functions global to be able to use GRO without a NAPI instance. Taking into account already global gro_flush(), it's now fully usable standalone. New functions are not exported, since they're not supposed to be used outside of the kernel core code. Signed-off-by: Alexander Lobakin Tested-by: Daniel Xu --- include/net/gro.h | 3 +++ net/core/dev.c | 33 +++------------------------------ net/core/gro.c | 32 ++++++++++++++++++++++++++++++++ 3 files changed, 38 insertions(+), 30 deletions(-) diff --git a/include/net/gro.h b/include/net/gro.h index 7aad366452d6..343d5afe7c9e 100644 --- a/include/net/gro.h +++ b/include/net/gro.h @@ -543,6 +543,9 @@ static inline void gro_normal_one(struct gro_node *gro, struct sk_buff *skb, gro_normal_list(gro); } +void gro_init(struct gro_node *gro); +void gro_cleanup(struct gro_node *gro); + /* This function is the alternative of 'inet_iif' and 'inet_sdif' * functions in case we can not rely on fields of IPCB. 
* diff --git a/net/core/dev.c b/net/core/dev.c index afa5e6e7eb3f..ed1b00b16916 100644 --- a/net/core/dev.c +++ b/net/core/dev.c @@ -6652,17 +6652,6 @@ static enum hrtimer_restart napi_watchdog(struct hrtimer *timer) return HRTIMER_NORESTART; } -static void init_gro_hash(struct napi_struct *napi) -{ - int i; - - for (i = 0; i < GRO_HASH_BUCKETS; i++) { - INIT_LIST_HEAD(&napi->gro.hash[i].list); - napi->gro.hash[i].count = 0; - } - napi->gro.bitmask = 0; -} - int dev_set_threaded(struct net_device *dev, bool threaded) { struct napi_struct *napi; @@ -6804,10 +6793,8 @@ void netif_napi_add_weight(struct net_device *dev, struct napi_struct *napi, INIT_HLIST_NODE(&napi->napi_hash_node); hrtimer_init(&napi->timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL_PINNED); napi->timer.function = napi_watchdog; - init_gro_hash(napi); + gro_init(&napi->gro); napi->skb = NULL; - INIT_LIST_HEAD(&napi->gro.rx_list); - napi->gro.rx_count = 0; napi->poll = poll; if (weight > NAPI_POLL_WEIGHT) netdev_err_once(dev, "%s() called with weight %d\n", __func__, @@ -6894,19 +6881,6 @@ void napi_enable(struct napi_struct *n) } EXPORT_SYMBOL(napi_enable); -static void flush_gro_hash(struct napi_struct *napi) -{ - int i; - - for (i = 0; i < GRO_HASH_BUCKETS; i++) { - struct sk_buff *skb, *n; - - list_for_each_entry_safe(skb, n, &napi->gro.hash[i].list, list) - kfree_skb(skb); - napi->gro.hash[i].count = 0; - } -} - /* Must be called in process context */ void __netif_napi_del(struct napi_struct *napi) { @@ -6921,8 +6895,7 @@ void __netif_napi_del(struct napi_struct *napi) list_del_rcu(&napi->dev_list); napi_free_frags(napi); - flush_gro_hash(napi); - napi->gro.bitmask = 0; + gro_cleanup(&napi->gro); if (napi->thread) { kthread_stop(napi->thread); @@ -12287,7 +12260,7 @@ static int __init net_dev_init(void) INIT_CSD(&sd->defer_csd, trigger_rx_softirq, sd); spin_lock_init(&sd->defer_lock); - init_gro_hash(&sd->backlog); + gro_init(&sd->backlog.gro); sd->backlog.poll = process_backlog; sd->backlog.weight = weight_p; INIT_LIST_HEAD(&sd->backlog.poll_list); diff --git a/net/core/gro.c b/net/core/gro.c index 77ec10d9cd43..d8e929ad7538 100644 --- a/net/core/gro.c +++ b/net/core/gro.c @@ -793,3 +793,35 @@ __sum16 __skb_gro_checksum_complete(struct sk_buff *skb) return sum; } EXPORT_SYMBOL(__skb_gro_checksum_complete); + +void gro_init(struct gro_node *gro) +{ + for (u32 i = 0; i < GRO_HASH_BUCKETS; i++) { + INIT_LIST_HEAD(&gro->hash[i].list); + gro->hash[i].count = 0; + } + + gro->bitmask = 0; + + INIT_LIST_HEAD(&gro->rx_list); + gro->rx_count = 0; +} + +void gro_cleanup(struct gro_node *gro) +{ + struct sk_buff *skb, *n; + + for (u32 i = 0; i < GRO_HASH_BUCKETS; i++) { + list_for_each_entry_safe(skb, n, &gro->hash[i].list, list) + kfree_skb(skb); + + gro->hash[i].count = 0; + } + + gro->bitmask = 0; + + list_for_each_entry_safe(skb, n, &gro->rx_list, list) + kfree_skb(skb); + + gro->rx_count = 0; +} From patchwork Wed Jan 15 15:18:56 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Lobakin X-Patchwork-Id: 13940540 X-Patchwork-Delegate: kuba@kernel.org Received: from mgamail.intel.com (mgamail.intel.com [192.198.163.15]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 12DE115C158; Wed, 15 Jan 2025 15:19:53 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=192.198.163.15 ARC-Seal: i=1; a=rsa-sha256; 
d=subspace.kernel.org; s=arc-20240116; t=1736954394; cv=none; b=A1dCEGf6EkrQvIend1bsLAwSU6Saw2Ly25z/v8FG4HWH8hL7EBYri4Y/8OEggxREEmY4AqpXVCJIYLCGtNSfB7zY87LA52Y20yrKTjHnaMH7ehFf9EeZXClkPBPIwAMI1ttebdSyW1RSOEgFT3UjXPf10cb4vBM9xLe+JvexHGs= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1736954394; c=relaxed/simple; bh=RXSJs+SYv1WQ5pctjH95Q+52NrXQPcteHToCrm0qK1A=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=sMZ16qXxwHDbFgVtw6Ng6RFa9xuwM2jcsXCS+YStXPqmcvXajC9kvQl7dg5vYBTz5riww2OQGkZX6RrWeBhS1WjwhaSxCnZYqDBute4VX3/6bfnYKh4oce1AZzmow5ZPwLAQy9fCpI6M/rRYJPUt071bgf/yklSLnW2H0E+DzLQ= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com; spf=pass smtp.mailfrom=intel.com; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b=NykRgtke; arc=none smtp.client-ip=192.198.163.15 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=intel.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b="NykRgtke" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1736954393; x=1768490393; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=RXSJs+SYv1WQ5pctjH95Q+52NrXQPcteHToCrm0qK1A=; b=NykRgtke77V+IkhVnLlcKIwImPPePEvHd2EK4EN15e6LRlcbynIcl2oj V4roG8aNjjABevR/EqirapdVGeWS+dw8+QTaIFJSFAQHZ7x4gmhKFlXq7 Dgj9PTkPjE4SHl8oqTw97MebT1XDrKuO9eHVg+r7rSMfexnY2q4UON2qp PxhcrM51zWIdWc6qClJcq7umTyGmbDs5n5cjH5cb3RK+TENQMomkQrEH4 a3kJGrNN+p67cuWaSFAeja+76p49tgtWXg9hkyYBt+anFwbRMTehsQ4m2 odr2irEWvjcTx+bKmANUvEhfbtcAPT4FnloLmsu7iT97YXnhjS1G3GvqY A==; X-CSE-ConnectionGUID: St9YWfFrTlKGWOPnPpTWCA== X-CSE-MsgGUID: RcbiiFyaTQy7URSPH3GZVg== X-IronPort-AV: E=McAfee;i="6700,10204,11316"; a="37451792" X-IronPort-AV: E=Sophos;i="6.13,206,1732608000"; d="scan'208";a="37451792" Received: from fmviesa007.fm.intel.com ([10.60.135.147]) by fmvoesa109.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 15 Jan 2025 07:19:53 -0800 X-CSE-ConnectionGUID: nJNbrTt4RrqbI4w2zg3dTQ== X-CSE-MsgGUID: B06YXzbdRLu51nQQvYBGKA== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.13,206,1732608000"; d="scan'208";a="105116660" Received: from newjersey.igk.intel.com ([10.102.20.203]) by fmviesa007.fm.intel.com with ESMTP; 15 Jan 2025 07:19:49 -0800 From: Alexander Lobakin To: Andrew Lunn , "David S. Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni Cc: Alexander Lobakin , Lorenzo Bianconi , Daniel Xu , Alexei Starovoitov , Daniel Borkmann , Andrii Nakryiko , John Fastabend , =?utf-8?q?Toke_H=C3=B8iland-J?= =?utf-8?q?=C3=B8rgensen?= , Jesper Dangaard Brouer , Martin KaFai Lau , netdev@vger.kernel.org, bpf@vger.kernel.org, linux-kernel@vger.kernel.org Subject: [PATCH net-next v3 3/8] bpf: cpumap: switch to GRO from netif_receive_skb_list() Date: Wed, 15 Jan 2025 16:18:56 +0100 Message-ID: <20250115151901.2063909-4-aleksander.lobakin@intel.com> X-Mailer: git-send-email 2.48.0 In-Reply-To: <20250115151901.2063909-1-aleksander.lobakin@intel.com> References: <20250115151901.2063909-1-aleksander.lobakin@intel.com> Precedence: bulk X-Mailing-List: bpf@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Patchwork-Delegate: kuba@kernel.org cpumap has its own BH context based on kthread. 
It has a sane batch size of 8 frames per cycle. GRO can be used here on its own. Adjust the cpumap calls to the upper stack to use the GRO API instead of netif_receive_skb_list(), which processes skbs in batches but doesn't involve the GRO layer at all. In plenty of tests, GRO performs better than listified receiving, even given that it has to calculate full frame checksums on the CPU. As GRO passes the skbs to the upper stack in batches of @gro_normal_batch, i.e. 8 by default, and skb->dev points to the device the frame came from, it is enough to disable the GRO netdev feature on that device to completely restore the original behaviour: untouched frames will still be bulked and passed to the upper stack by 8, as they were with netif_receive_skb_list(). Signed-off-by: Alexander Lobakin Tested-by: Daniel Xu Acked-by: Jesper Dangaard Brouer --- kernel/bpf/cpumap.c | 45 ++++++++++++++++++++++++++++++++++++++++++--- 1 file changed, 42 insertions(+), 3 deletions(-) diff --git a/kernel/bpf/cpumap.c b/kernel/bpf/cpumap.c index 774accbd4a22..10d062dddb6f 100644 --- a/kernel/bpf/cpumap.c +++ b/kernel/bpf/cpumap.c @@ -33,8 +33,8 @@ #include #include -#include <linux/netdevice.h> /* netif_receive_skb_list */ -#include <linux/etherdevice.h> /* eth_type_trans */ +#include +#include /* General idea: XDP packets getting XDP redirected to another CPU, * will maximum be stored/queued for one driver ->poll() call. It is @@ -68,6 +68,7 @@ struct bpf_cpu_map_entry { struct bpf_cpumap_val value; struct bpf_prog *prog; + struct gro_node gro; struct completion kthread_running; struct rcu_work free_work; @@ -261,10 +262,36 @@ static int cpu_map_bpf_prog_run(struct bpf_cpu_map_entry *rcpu, void **frames, return nframes; } +static void cpu_map_gro_receive(struct bpf_cpu_map_entry *rcpu, + struct list_head *list) +{ + struct sk_buff *skb, *tmp; + + list_for_each_entry_safe(skb, tmp, list, list) { + skb_list_del_init(skb); + gro_receive_skb(&rcpu->gro, skb); + } +} + +static void cpu_map_gro_flush(struct bpf_cpu_map_entry *rcpu, bool empty) +{ + /* + * If the ring is not empty, there'll be a new iteration soon, and we + * only need to do a full flush if a tick is long (> 1 ms). + * If the ring is empty, to not hold GRO packets in the stack for too + * long, do a full flush. + * This is equivalent to how NAPI decides whether to perform a full + * flush.
+ */ + gro_flush(&rcpu->gro, !empty && HZ >= 1000); + gro_normal_list(&rcpu->gro); +} + static int cpu_map_kthread_run(void *data) { struct bpf_cpu_map_entry *rcpu = data; unsigned long last_qs = jiffies; + u32 packets = 0; complete(&rcpu->kthread_running); set_current_state(TASK_INTERRUPTIBLE); @@ -282,6 +309,7 @@ static int cpu_map_kthread_run(void *data) void *frames[CPUMAP_BATCH]; void *skbs[CPUMAP_BATCH]; LIST_HEAD(list); + bool empty; /* Release CPU reschedule checks */ if (__ptr_ring_empty(rcpu->queue)) { @@ -361,7 +389,15 @@ static int cpu_map_kthread_run(void *data) trace_xdp_cpumap_kthread(rcpu->map_id, n, kmem_alloc_drops, sched, &stats); - netif_receive_skb_list(&list); + cpu_map_gro_receive(rcpu, &list); + + /* Flush either every 64 packets or in case of empty ring */ + empty = __ptr_ring_empty(rcpu->queue); + if (packets += n >= NAPI_POLL_WEIGHT || empty) { + cpu_map_gro_flush(rcpu, empty); + packets = 0; + } + local_bh_enable(); /* resched point, may call do_softirq() */ } __set_current_state(TASK_RUNNING); @@ -430,6 +466,7 @@ __cpu_map_entry_alloc(struct bpf_map *map, struct bpf_cpumap_val *value, rcpu->cpu = cpu; rcpu->map_id = map->id; rcpu->value.qsize = value->qsize; + gro_init(&rcpu->gro); if (fd > 0 && __cpu_map_load_bpf_program(rcpu, map, fd)) goto free_ptr_ring; @@ -458,6 +495,7 @@ __cpu_map_entry_alloc(struct bpf_map *map, struct bpf_cpumap_val *value, if (rcpu->prog) bpf_prog_put(rcpu->prog); free_ptr_ring: + gro_cleanup(&rcpu->gro); ptr_ring_cleanup(rcpu->queue, NULL); free_queue: kfree(rcpu->queue); @@ -487,6 +525,7 @@ static void __cpu_map_entry_free(struct work_struct *work) if (rcpu->prog) bpf_prog_put(rcpu->prog); + gro_cleanup(&rcpu->gro); /* The queue should be empty at this point */ __cpu_map_ring_cleanup(rcpu->queue); ptr_ring_cleanup(rcpu->queue, NULL); From patchwork Wed Jan 15 15:18:57 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Lobakin X-Patchwork-Id: 13940541 X-Patchwork-Delegate: kuba@kernel.org Received: from mgamail.intel.com (mgamail.intel.com [192.198.163.15]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id F063118A6B2; Wed, 15 Jan 2025 15:19:56 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=192.198.163.15 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1736954398; cv=none; b=MgXZn2hqfOU/hj90mCIipBCochCT9JNtE/I8vwEjWnUWpOoCwl2UEZTIxgnuXC13A3lOw4IRtDrPExMsMwyyzn78x1KWi9ISKIn86ub7q88OtpWv6hzoqB+pG/Bs8R67hCJilaZBO/wOtJnWZmuCxiXSs9wldfVfqq5XSRGGgoM= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1736954398; c=relaxed/simple; bh=+ja1zWQhSNlwp6SrSbZPWtZsGgzZ5hOoajqle/IVSFI=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=cgco5vyrGie80Y3cCAvSUJc7C7fHGL5yzCjYW5ZNP43z0Cg4Yg1OEECnrYJdw5kzbNSOjw+JTqPP+jgsXs+flT44nM3HIEMXaGRiVnKgj0+uf8/1Ccq/TOD6LBZc3xPYgMHPqKGXuz3T3fI/e6V9JRKrI78TN1MCuQf0PMGw2N0= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com; spf=pass smtp.mailfrom=intel.com; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b=ecHW6R0U; arc=none smtp.client-ip=192.198.163.15 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=intel.com 
Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b="ecHW6R0U" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1736954397; x=1768490397; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=+ja1zWQhSNlwp6SrSbZPWtZsGgzZ5hOoajqle/IVSFI=; b=ecHW6R0U+fTf3fYnEsJ6OPpe8N6IYLP0uCwbcrVnN/gZYg4uEr7UoSLr y7ULFUDvDS6imyahnL93v6StyA9PqSA6sCRaY375W1tR+bP3WGPa2J5GY c97sQRCIbH/ZM77NLTLy4zCy4gzIkuGUlKBPX32rHyeaSnM+X094U7BOx 5V0nHkli+2QLKlVko0rhb7CjcyW97ShdOCM1oIvCasr8LZvfEhiP+8g70 hfC9FtYxkWtBdB1NVddKQovonBM/duUKiLM4KoEs3cHGnWQH1mKOln/Aw 3aHa5//jT3aNiHqDx+wsb13g6UFTq2ftTb8xSSYmiD1rc4vyGhB8irQIf A==; X-CSE-ConnectionGUID: wxJm7htNQNy1QDhEQLbUlg== X-CSE-MsgGUID: 13B7Gp+qTnSxAM2vwy6TDA== X-IronPort-AV: E=McAfee;i="6700,10204,11316"; a="37451809" X-IronPort-AV: E=Sophos;i="6.13,206,1732608000"; d="scan'208";a="37451809" Received: from fmviesa007.fm.intel.com ([10.60.135.147]) by fmvoesa109.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 15 Jan 2025 07:19:57 -0800 X-CSE-ConnectionGUID: Ois2AerYTQiw264kVI0qOQ== X-CSE-MsgGUID: VGHbF7/2R5e3l9YL9Kgr0A== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.13,206,1732608000"; d="scan'208";a="105116666" Received: from newjersey.igk.intel.com ([10.102.20.203]) by fmviesa007.fm.intel.com with ESMTP; 15 Jan 2025 07:19:53 -0800 From: Alexander Lobakin To: Andrew Lunn , "David S. Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni Cc: Alexander Lobakin , Lorenzo Bianconi , Daniel Xu , Alexei Starovoitov , Daniel Borkmann , Andrii Nakryiko , John Fastabend , =?utf-8?q?Toke_H=C3=B8iland-J?= =?utf-8?q?=C3=B8rgensen?= , Jesper Dangaard Brouer , Martin KaFai Lau , netdev@vger.kernel.org, bpf@vger.kernel.org, linux-kernel@vger.kernel.org Subject: [PATCH net-next v3 4/8] bpf: cpumap: reuse skb array instead of a linked list to chain skbs Date: Wed, 15 Jan 2025 16:18:57 +0100 Message-ID: <20250115151901.2063909-5-aleksander.lobakin@intel.com> X-Mailer: git-send-email 2.48.0 In-Reply-To: <20250115151901.2063909-1-aleksander.lobakin@intel.com> References: <20250115151901.2063909-1-aleksander.lobakin@intel.com> Precedence: bulk X-Mailing-List: bpf@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Patchwork-Delegate: kuba@kernel.org cpumap still uses linked lists to store a list of skbs to pass to the stack. Now that we don't use listified Rx in favor of napi_gro_receive(), linked list is now an unneeded overhead. Inside the polling loop, we already have an array of skbs. Let's reuse it for skbs passed to cpumap (generic XDP) and keep there in case of XDP_PASS when a program is installed to the map itself. Don't list regular xdp_frames after converting them to skbs as well; store them in the mentioned array (but *before* generic skbs as the latters have lower priority) and call gro_receive_skb() for each array element after they're done. 
Signed-off-by: Alexander Lobakin Tested-by: Daniel Xu --- kernel/bpf/cpumap.c | 119 +++++++++++++++++++++++--------------------- 1 file changed, 61 insertions(+), 58 deletions(-) diff --git a/kernel/bpf/cpumap.c b/kernel/bpf/cpumap.c index 10d062dddb6f..4fae029c4490 100644 --- a/kernel/bpf/cpumap.c +++ b/kernel/bpf/cpumap.c @@ -134,22 +134,23 @@ static void __cpu_map_ring_cleanup(struct ptr_ring *ring) } } -static void cpu_map_bpf_prog_run_skb(struct bpf_cpu_map_entry *rcpu, - struct list_head *listp, - struct xdp_cpumap_stats *stats) +static u32 cpu_map_bpf_prog_run_skb(struct bpf_cpu_map_entry *rcpu, + void **skbs, u32 skb_n, + struct xdp_cpumap_stats *stats) { - struct sk_buff *skb, *tmp; struct xdp_buff xdp; - u32 act; + u32 act, pass = 0; int err; - list_for_each_entry_safe(skb, tmp, listp, list) { + for (u32 i = 0; i < skb_n; i++) { + struct sk_buff *skb = skbs[i]; + act = bpf_prog_run_generic_xdp(skb, &xdp, rcpu->prog); switch (act) { case XDP_PASS: + skbs[pass++] = skb; break; case XDP_REDIRECT: - skb_list_del_init(skb); err = xdp_do_generic_redirect(skb->dev, skb, &xdp, rcpu->prog); if (unlikely(err)) { @@ -158,7 +159,7 @@ static void cpu_map_bpf_prog_run_skb(struct bpf_cpu_map_entry *rcpu, } else { stats->redirect++; } - return; + break; default: bpf_warn_invalid_xdp_action(NULL, rcpu->prog, act); fallthrough; @@ -166,12 +167,15 @@ static void cpu_map_bpf_prog_run_skb(struct bpf_cpu_map_entry *rcpu, trace_xdp_exception(skb->dev, rcpu->prog, act); fallthrough; case XDP_DROP: - skb_list_del_init(skb); - kfree_skb(skb); + napi_consume_skb(skb, true); stats->drop++; - return; + break; } } + + stats->pass += pass; + + return pass; } static int cpu_map_bpf_prog_run_xdp(struct bpf_cpu_map_entry *rcpu, @@ -205,7 +209,6 @@ static int cpu_map_bpf_prog_run_xdp(struct bpf_cpu_map_entry *rcpu, stats->drop++; } else { frames[nframes++] = xdpf; - stats->pass++; } break; case XDP_REDIRECT: @@ -229,48 +232,44 @@ static int cpu_map_bpf_prog_run_xdp(struct bpf_cpu_map_entry *rcpu, } xdp_clear_return_frame_no_direct(); + stats->pass += nframes; return nframes; } #define CPUMAP_BATCH 8 -static int cpu_map_bpf_prog_run(struct bpf_cpu_map_entry *rcpu, void **frames, - int xdp_n, struct xdp_cpumap_stats *stats, - struct list_head *list) +struct cpu_map_ret { + u32 xdp_n; + u32 skb_n; +}; + +static void cpu_map_bpf_prog_run(struct bpf_cpu_map_entry *rcpu, void **frames, + void **skbs, struct cpu_map_ret *ret, + struct xdp_cpumap_stats *stats) { struct bpf_net_context __bpf_net_ctx, *bpf_net_ctx; - int nframes; if (!rcpu->prog) - return xdp_n; + goto out; rcu_read_lock_bh(); bpf_net_ctx = bpf_net_ctx_set(&__bpf_net_ctx); - nframes = cpu_map_bpf_prog_run_xdp(rcpu, frames, xdp_n, stats); + ret->xdp_n = cpu_map_bpf_prog_run_xdp(rcpu, frames, ret->xdp_n, stats); + if (unlikely(ret->skb_n)) + ret->skb_n = cpu_map_bpf_prog_run_skb(rcpu, skbs, ret->skb_n, + stats); if (stats->redirect) xdp_do_flush(); - if (unlikely(!list_empty(list))) - cpu_map_bpf_prog_run_skb(rcpu, list, stats); - bpf_net_ctx_clear(bpf_net_ctx); rcu_read_unlock_bh(); /* resched point, may call do_softirq() */ - return nframes; -} - -static void cpu_map_gro_receive(struct bpf_cpu_map_entry *rcpu, - struct list_head *list) -{ - struct sk_buff *skb, *tmp; - - list_for_each_entry_safe(skb, tmp, list, list) { - skb_list_del_init(skb); - gro_receive_skb(&rcpu->gro, skb); - } +out: + if (unlikely(ret->skb_n) && ret->xdp_n) + memmove(&skbs[ret->xdp_n], skbs, ret->skb_n * sizeof(*skbs)); } static void cpu_map_gro_flush(struct bpf_cpu_map_entry *rcpu, 
bool empty) @@ -305,10 +304,10 @@ static int cpu_map_kthread_run(void *data) struct xdp_cpumap_stats stats = {}; /* zero stats */ unsigned int kmem_alloc_drops = 0, sched = 0; gfp_t gfp = __GFP_ZERO | GFP_ATOMIC; - int i, n, m, nframes, xdp_n; + struct cpu_map_ret ret = { }; void *frames[CPUMAP_BATCH]; void *skbs[CPUMAP_BATCH]; - LIST_HEAD(list); + u32 i, n, m; bool empty; /* Release CPU reschedule checks */ @@ -334,7 +333,7 @@ static int cpu_map_kthread_run(void *data) */ n = __ptr_ring_consume_batched(rcpu->queue, frames, CPUMAP_BATCH); - for (i = 0, xdp_n = 0; i < n; i++) { + for (i = 0; i < n; i++) { void *f = frames[i]; struct page *page; @@ -342,11 +341,11 @@ static int cpu_map_kthread_run(void *data) struct sk_buff *skb = f; __ptr_clear_bit(0, &skb); - list_add_tail(&skb->list, &list); + skbs[ret.skb_n++] = skb; continue; } - frames[xdp_n++] = f; + frames[ret.xdp_n++] = f; page = virt_to_page(f); /* Bring struct page memory area to curr CPU. Read by @@ -357,39 +356,43 @@ static int cpu_map_kthread_run(void *data) } /* Support running another XDP prog on this CPU */ - nframes = cpu_map_bpf_prog_run(rcpu, frames, xdp_n, &stats, &list); - if (nframes) { - m = kmem_cache_alloc_bulk(net_hotdata.skbuff_cache, - gfp, nframes, skbs); - if (unlikely(m == 0)) { - for (i = 0; i < nframes; i++) - skbs[i] = NULL; /* effect: xdp_return_frame */ - kmem_alloc_drops += nframes; - } + cpu_map_bpf_prog_run(rcpu, frames, skbs, &ret, &stats); + if (!ret.xdp_n) { + local_bh_disable(); + goto stats; + } + + m = kmem_cache_alloc_bulk(net_hotdata.skbuff_cache, gfp, + ret.xdp_n, skbs); + if (unlikely(m < ret.xdp_n)) { + for (i = m; i < ret.xdp_n; i++) + xdp_return_frame(frames[i]); + + if (ret.skb_n) + memmove(&skbs[m], &skbs[ret.xdp_n], + ret.skb_n * sizeof(*skbs)); + + kmem_alloc_drops += ret.xdp_n - m; + ret.xdp_n = m; } local_bh_disable(); - for (i = 0; i < nframes; i++) { + for (i = 0; i < ret.xdp_n; i++) { struct xdp_frame *xdpf = frames[i]; - struct sk_buff *skb = skbs[i]; - - skb = __xdp_build_skb_from_frame(xdpf, skb, - xdpf->dev_rx); - if (!skb) { - xdp_return_frame(xdpf); - continue; - } - list_add_tail(&skb->list, &list); + /* Can fail only when !skb -- already handled above */ + __xdp_build_skb_from_frame(xdpf, skbs[i], xdpf->dev_rx); } +stats: /* Feedback loop via tracepoint. * NB: keep before recv to allow measuring enqueue/dequeue latency. 
*/ trace_xdp_cpumap_kthread(rcpu->map_id, n, kmem_alloc_drops, sched, &stats); - cpu_map_gro_receive(rcpu, &list); + for (i = 0; i < ret.xdp_n + ret.skb_n; i++) + gro_receive_skb(&rcpu->gro, skbs[i]); /* Flush either every 64 packets or in case of empty ring */ empty = __ptr_ring_empty(rcpu->queue); From patchwork Wed Jan 15 15:18:58 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Lobakin X-Patchwork-Id: 13940542 X-Patchwork-Delegate: kuba@kernel.org Received: from mgamail.intel.com (mgamail.intel.com [192.198.163.15]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 10F8E194A64; Wed, 15 Jan 2025 15:20:01 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=192.198.163.15 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1736954402; cv=none; b=i+Gk1wPROQTDBwiGN/OmaTK15qKfEZ0KcSYsxVOwAUYenlbsm5lIovWVW4UfEVsPpL089W1tVdIdgZfH3gdZO5asPaDpksUd0DM4aphG5222VVYZaZUV7rOj5uivS0nxW4Y7r6dFe9h19V0MttkI0ty2zMnYqKfasdiWPza/q5I= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1736954402; c=relaxed/simple; bh=ccpEnGgtrMRmJlaRq5I0k3F41p3m93uLmAUoR9BiNp8=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=oSUWwzNwA0xNuFFNc+15kvBAxIEhATqEs7Eo6ANdoby8JugR+GosAtaZDrse9gS9bqKAEJeFbuuuEgy/MtxWbR161DH/nYq7l2u2wZnIo76811EidcCe2Ls4KteaVZazHyA1390aHoaP+xjRGisV9elH21ygdZTAEpohIpnd2Sw= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com; spf=pass smtp.mailfrom=intel.com; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b=SOC+bVG8; arc=none smtp.client-ip=192.198.163.15 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=intel.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b="SOC+bVG8" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1736954401; x=1768490401; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=ccpEnGgtrMRmJlaRq5I0k3F41p3m93uLmAUoR9BiNp8=; b=SOC+bVG8ma5JwFvYUdtqlMcbDzKeUlIBUfT5FfcN2HHQ+HZefKL8I2Ad RZer4JWKARro9YmgQJeQMMC3RVBqRnzx3keyrnXEDo9wuTOqeNLQ9qFQC bb5ncTUTLvRkjHsIEuJB7Hb0TPOc2BoNHOwD/9iblELijOa8GDl0YhvJM dmXW0n4QJchXJzaUDTTOZ5dKF4l0G9xT7VEu4wseM0/dlbj8edxWV2r8d wjRdh+5hrq2jpyX/YxOegdXPnuhRXLs3RjhYuX3GexBv5RtVc6bWk78NY DFVSB2gTmTCw+O8h2Nme3YUV6L4b2xA/ppHu307H7L6DnlySoztdSMz3m A==; X-CSE-ConnectionGUID: WYRLAmPQRrOFD6xUqwLA5w== X-CSE-MsgGUID: b95ZM9MjRHuYKYzBS4E7vg== X-IronPort-AV: E=McAfee;i="6700,10204,11316"; a="37451822" X-IronPort-AV: E=Sophos;i="6.13,206,1732608000"; d="scan'208";a="37451822" Received: from fmviesa007.fm.intel.com ([10.60.135.147]) by fmvoesa109.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 15 Jan 2025 07:20:01 -0800 X-CSE-ConnectionGUID: +PCEBH91TL2aeNaKof0b5A== X-CSE-MsgGUID: PtlmznioRumC2lLs1Yv0Bw== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.13,206,1732608000"; d="scan'208";a="105116673" Received: from newjersey.igk.intel.com ([10.102.20.203]) by fmviesa007.fm.intel.com with ESMTP; 15 Jan 2025 07:19:57 -0800 From: Alexander Lobakin To: Andrew Lunn , "David S. 
Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni Cc: Alexander Lobakin , Lorenzo Bianconi , Daniel Xu , Alexei Starovoitov , Daniel Borkmann , Andrii Nakryiko , John Fastabend , =?utf-8?q?Toke_H=C3=B8iland-J?= =?utf-8?q?=C3=B8rgensen?= , Jesper Dangaard Brouer , Martin KaFai Lau , netdev@vger.kernel.org, bpf@vger.kernel.org, linux-kernel@vger.kernel.org Subject: [PATCH net-next v3 5/8] net: skbuff: introduce napi_skb_cache_get_bulk() Date: Wed, 15 Jan 2025 16:18:58 +0100 Message-ID: <20250115151901.2063909-6-aleksander.lobakin@intel.com> X-Mailer: git-send-email 2.48.0 In-Reply-To: <20250115151901.2063909-1-aleksander.lobakin@intel.com> References: <20250115151901.2063909-1-aleksander.lobakin@intel.com> Precedence: bulk X-Mailing-List: bpf@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Patchwork-Delegate: kuba@kernel.org Add a function to get an array of skbs from the NAPI percpu cache. It's supposed to be a drop-in replacement for kmem_cache_alloc_bulk(skbuff_head_cache, GFP_ATOMIC) and xdp_alloc_skb_bulk(GFP_ATOMIC). The difference (apart from the requirement to call it only from the BH) is that it tries to use as many NAPI cache entries for skbs as possible, and allocate new ones only if needed. The logic is as follows: * there is enough skbs in the cache: decache them and return to the caller; * not enough: try refilling the cache first. If there is now enough skbs, return; * still not enough: try allocating skbs directly to the output array with %GFP_ZERO, maybe we'll be able to get some. If there's now enough, return; * still not enough: return as many as we were able to obtain. Most of times, if called from the NAPI polling loop, the first one will be true, sometimes (rarely) the second one. The third and the fourth -- only under heavy memory pressure. It can save significant amounts of CPU cycles if there are GRO cycles and/or Tx completion cycles (anything that descends to napi_skb_cache_put()) happening on this CPU. Signed-off-by: Alexander Lobakin Tested-by: Daniel Xu --- include/linux/skbuff.h | 1 + net/core/skbuff.c | 62 ++++++++++++++++++++++++++++++++++++++++++ 2 files changed, 63 insertions(+) diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h index bb2b751d274a..1c089c7c14e1 100644 --- a/include/linux/skbuff.h +++ b/include/linux/skbuff.h @@ -1315,6 +1315,7 @@ struct sk_buff *build_skb_around(struct sk_buff *skb, void *data, unsigned int frag_size); void skb_attempt_defer_free(struct sk_buff *skb); +u32 napi_skb_cache_get_bulk(void **skbs, u32 n); struct sk_buff *napi_build_skb(void *data, unsigned int frag_size); struct sk_buff *slab_build_skb(void *data); diff --git a/net/core/skbuff.c b/net/core/skbuff.c index a441613a1e6c..42eb31dcc9ce 100644 --- a/net/core/skbuff.c +++ b/net/core/skbuff.c @@ -367,6 +367,68 @@ static struct sk_buff *napi_skb_cache_get(void) return skb; } +/** + * napi_skb_cache_get_bulk - obtain a number of zeroed skb heads from the cache + * @skbs: pointer to an at least @n-sized array to fill with skb pointers + * @n: number of entries to provide + * + * Tries to obtain @n &sk_buff entries from the NAPI percpu cache and writes + * the pointers into the provided array @skbs. If there are less entries + * available, tries to replenish the cache and bulk-allocates the diff from + * the MM layer if needed. + * The heads are being zeroed with either memset() or %__GFP_ZERO, so they are + * ready for {,__}build_skb_around() and don't have any data buffers attached. 
+ * Must be called *only* from the BH context. + * + * Return: number of successfully allocated skbs (@n if no actual allocation + * needed or kmem_cache_alloc_bulk() didn't fail). + */ +u32 napi_skb_cache_get_bulk(void **skbs, u32 n) +{ + struct napi_alloc_cache *nc = this_cpu_ptr(&napi_alloc_cache); + u32 bulk, total = n; + + local_lock_nested_bh(&napi_alloc_cache.bh_lock); + + if (nc->skb_count >= n) + goto get; + + /* No enough cached skbs. Try refilling the cache first */ + bulk = min(NAPI_SKB_CACHE_SIZE - nc->skb_count, NAPI_SKB_CACHE_BULK); + nc->skb_count += kmem_cache_alloc_bulk(net_hotdata.skbuff_cache, + GFP_ATOMIC | __GFP_NOWARN, bulk, + &nc->skb_cache[nc->skb_count]); + if (likely(nc->skb_count >= n)) + goto get; + + /* Still not enough. Bulk-allocate the missing part directly, zeroed */ + n -= kmem_cache_alloc_bulk(net_hotdata.skbuff_cache, + GFP_ATOMIC | __GFP_ZERO | __GFP_NOWARN, + n - nc->skb_count, &skbs[nc->skb_count]); + if (likely(nc->skb_count >= n)) + goto get; + + /* kmem_cache didn't allocate the number we need, limit the output */ + total -= n - nc->skb_count; + n = nc->skb_count; + +get: + for (u32 base = nc->skb_count - n, i = 0; i < n; i++) { + u32 cache_size = kmem_cache_size(net_hotdata.skbuff_cache); + + skbs[i] = nc->skb_cache[base + i]; + + kasan_mempool_unpoison_object(skbs[i], cache_size); + memset(skbs[i], 0, offsetof(struct sk_buff, tail)); + } + + nc->skb_count -= n; + local_unlock_nested_bh(&napi_alloc_cache.bh_lock); + + return total; +} +EXPORT_SYMBOL_GPL(napi_skb_cache_get_bulk); + static inline void __finalize_skb_around(struct sk_buff *skb, void *data, unsigned int size) { From patchwork Wed Jan 15 15:18:59 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Lobakin X-Patchwork-Id: 13940543 X-Patchwork-Delegate: kuba@kernel.org Received: from mgamail.intel.com (mgamail.intel.com [192.198.163.15]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id EFE351A00D6; Wed, 15 Jan 2025 15:20:04 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=192.198.163.15 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1736954406; cv=none; b=f1mAakgx31WmacRjxHOKW/I5bUWrsUTywubeAT+2PJ47br/Vc8+L58sIVF6Ygz+mZhDVfoyhRcrrFTgjD2QvGux6VzLYLWLC+YtmtymBI/0ErXINmfHdkdOcOmhP1kqImwpMl3ULLTRkGOSnmg/kSUjcbvPuxdoAQS2HL4AqU3k= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1736954406; c=relaxed/simple; bh=VLvjyuGtrJTPn43cvE7ThAgmCuRn4A0Q+5OIeczTnMs=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=YYHvHYUqVcYB6HB6tJycHt4Mwq4Dpk2YRiSJQKud3PfCYN3tqmm3F34qt8F4eM85fy+KyuwCd8ETzrT/r7XguRx9D9dqalW72C62u4FzJ06NOBCSegCWdmgL1O0r/vaD6fVe2VFwazBaGqOtkgRdKMwHg3PoOWSTSqsaQq2d1YE= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com; spf=pass smtp.mailfrom=intel.com; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b=UZUGmEnS; arc=none smtp.client-ip=192.198.163.15 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=intel.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b="UZUGmEnS" DKIM-Signature: v=1; a=rsa-sha256; 
From patchwork Wed Jan 15 15:18:59 2025
X-Patchwork-Submitter: Alexander Lobakin
X-Patchwork-Id: 13940543
From: Alexander Lobakin
To: Andrew Lunn , "David S. Miller" , Eric Dumazet , Jakub Kicinski ,
 Paolo Abeni
Cc: Alexander Lobakin , Lorenzo Bianconi , Daniel Xu ,
 Alexei Starovoitov , Daniel Borkmann , Andrii Nakryiko ,
 John Fastabend , Toke Høiland-Jørgensen , Jesper Dangaard Brouer ,
 Martin KaFai Lau , netdev@vger.kernel.org, bpf@vger.kernel.org,
 linux-kernel@vger.kernel.org
Subject: [PATCH net-next v3 6/8] bpf: cpumap: switch to napi_skb_cache_get_bulk()
Date: Wed, 15 Jan 2025 16:18:59 +0100
Message-ID: <20250115151901.2063909-7-aleksander.lobakin@intel.com>
In-Reply-To: <20250115151901.2063909-1-aleksander.lobakin@intel.com>
References: <20250115151901.2063909-1-aleksander.lobakin@intel.com>

Now that cpumap uses GRO, which drops unused skb heads into the NAPI
cache, use napi_skb_cache_get_bulk() to try to reuse those cached entries
and lower the pressure on the MM layer.

Always disable BH before checking and running the cpumap-pinned XDP prog
and don't re-enable it between that and allocating the skb bulk, as the
NAPI caches may only be accessed from BH context.

The better GRO aggregates packets, the fewer new skbs need to be
allocated. If an aggregated skb contains 16 frags, 15 skb heads were
returned to the cache, so the next 15 skbs will be built without touching
the MM layer at all.
The same trafficgen UDP GRO test now shows:

              GRO off  GRO on
threaded GRO  2.3      4        Mpps
thr bulk GRO  2.4      4.7      Mpps
diff          +4       +17      %

Comparing to the baseline cpumap:

baseline      2.7      N/A      Mpps
thr bulk GRO  2.4      4.7      Mpps
diff          -11      +74      %

Signed-off-by: Alexander Lobakin
Tested-by: Daniel Xu
---
 kernel/bpf/cpumap.c | 15 ++++++---------
 1 file changed, 6 insertions(+), 9 deletions(-)

diff --git a/kernel/bpf/cpumap.c b/kernel/bpf/cpumap.c
index 4fae029c4490..6997b67a0104 100644
--- a/kernel/bpf/cpumap.c
+++ b/kernel/bpf/cpumap.c
@@ -253,7 +253,7 @@ static void cpu_map_bpf_prog_run(struct bpf_cpu_map_entry *rcpu, void **frames,
         if (!rcpu->prog)
                 goto out;
 
-        rcu_read_lock_bh();
+        rcu_read_lock();
         bpf_net_ctx = bpf_net_ctx_set(&__bpf_net_ctx);
 
         ret->xdp_n = cpu_map_bpf_prog_run_xdp(rcpu, frames, ret->xdp_n, stats);
@@ -265,7 +265,7 @@ static void cpu_map_bpf_prog_run(struct bpf_cpu_map_entry *rcpu, void **frames,
                 xdp_do_flush();
 
         bpf_net_ctx_clear(bpf_net_ctx);
-        rcu_read_unlock_bh(); /* resched point, may call do_softirq() */
+        rcu_read_unlock();
 
 out:
         if (unlikely(ret->skb_n) && ret->xdp_n)
@@ -303,7 +303,6 @@ static int cpu_map_kthread_run(void *data)
         while (!kthread_should_stop() || !__ptr_ring_empty(rcpu->queue)) {
                 struct xdp_cpumap_stats stats = {}; /* zero stats */
                 unsigned int kmem_alloc_drops = 0, sched = 0;
-                gfp_t gfp = __GFP_ZERO | GFP_ATOMIC;
                 struct cpu_map_ret ret = { };
                 void *frames[CPUMAP_BATCH];
                 void *skbs[CPUMAP_BATCH];
@@ -355,15 +354,14 @@ static int cpu_map_kthread_run(void *data)
                         prefetchw(page);
                 }
 
+                local_bh_disable();
+
                 /* Support running another XDP prog on this CPU */
                 cpu_map_bpf_prog_run(rcpu, frames, skbs, &ret, &stats);
-                if (!ret.xdp_n) {
-                        local_bh_disable();
+                if (!ret.xdp_n)
                         goto stats;
-                }
 
-                m = kmem_cache_alloc_bulk(net_hotdata.skbuff_cache, gfp,
-                                          ret.xdp_n, skbs);
+                m = napi_skb_cache_get_bulk(skbs, ret.xdp_n);
                 if (unlikely(m < ret.xdp_n)) {
                         for (i = m; i < ret.xdp_n; i++)
                                 xdp_return_frame(frames[i]);
@@ -376,7 +374,6 @@ static int cpu_map_kthread_run(void *data)
                         ret.xdp_n = m;
                 }
 
-                local_bh_disable();
                 for (i = 0; i < ret.xdp_n; i++) {
                         struct xdp_frame *xdpf = frames[i];
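For illustration only (not part of the patch): the cpumap kthread is not a NAPI
poller, so it has to create the required BH context by hand before touching the
percpu cache, which is what the hunk above does. A stripped-down sketch of that
calling pattern, with demo_get_heads_from_kthread() being a made-up name:

#include <linux/bottom_half.h>
#include <linux/skbuff.h>

/* Hypothetical non-NAPI (kthread) caller: the NAPI percpu cache may only
 * be touched with BH disabled, so wrap the bulk get (and ideally the
 * subsequent skb construction) in a local_bh_disable() section.
 */
static u32 demo_get_heads_from_kthread(void **heads, u32 n)
{
        u32 got;

        local_bh_disable();
        got = napi_skb_cache_get_bulk(heads, n);
        /* ...build the skbs and pass them to GRO while BH is still off... */
        local_bh_enable();

        return got;
}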
From patchwork Wed Jan 15 15:19:00 2025
X-Patchwork-Submitter: Alexander Lobakin
X-Patchwork-Id: 13940544
From: Alexander Lobakin
To: Andrew Lunn , "David S. Miller" , Eric Dumazet , Jakub Kicinski ,
 Paolo Abeni
Cc: Alexander Lobakin , Lorenzo Bianconi , Daniel Xu ,
 Alexei Starovoitov , Daniel Borkmann , Andrii Nakryiko ,
 John Fastabend , Toke Høiland-Jørgensen , Jesper Dangaard Brouer ,
 Martin KaFai Lau , netdev@vger.kernel.org, bpf@vger.kernel.org,
 linux-kernel@vger.kernel.org
Subject: [PATCH net-next v3 7/8] veth: use napi_skb_cache_get_bulk() instead of xdp_alloc_skb_bulk()
Date: Wed, 15 Jan 2025 16:19:00 +0100
Message-ID: <20250115151901.2063909-8-aleksander.lobakin@intel.com>
In-Reply-To: <20250115151901.2063909-1-aleksander.lobakin@intel.com>
References: <20250115151901.2063909-1-aleksander.lobakin@intel.com>

Now that skbs can be bulk-allocated from the NAPI cache, use
napi_skb_cache_get_bulk() in veth as well instead of allocating directly
from the kmem caches. veth uses NAPI and GRO, so the switch is both
context-safe and beneficial.
Signed-off-by: Alexander Lobakin
---
 drivers/net/veth.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/drivers/net/veth.c b/drivers/net/veth.c
index 01251868a9c2..7634ee8843bc 100644
--- a/drivers/net/veth.c
+++ b/drivers/net/veth.c
@@ -684,8 +684,7 @@ static void veth_xdp_rcv_bulk_skb(struct veth_rq *rq, void **frames,
         void *skbs[VETH_XDP_BATCH];
         int i;
 
-        if (xdp_alloc_skb_bulk(skbs, n_xdpf,
-                               GFP_ATOMIC | __GFP_ZERO) < 0) {
+        if (unlikely(!napi_skb_cache_get_bulk(skbs, n_xdpf))) {
                 for (i = 0; i < n_xdpf; i++)
                         xdp_return_frame(frames[i]);
                 stats->rx_drops += n_xdpf;
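For illustration only (not part of the patch): note the differing return
conventions. xdp_alloc_skb_bulk() used an error-code style (0 or -ENOMEM), while
napi_skb_cache_get_bulk() returns the number of heads actually obtained, so a
caller that cares about partial success handles the tail itself, as the cpumap
patch earlier in the series does. A hypothetical helper,
demo_get_heads_or_return_frames(), sketching that pattern:

#include <linux/skbuff.h>
#include <net/xdp.h>

/* Hypothetical: request @n skb heads (BH context required) and return the
 * xdp frames for which no head could be obtained, mirroring cpumap.
 */
static u32 demo_get_heads_or_return_frames(void **heads,
                                           struct xdp_frame **frames, u32 n)
{
        u32 got, i;

        got = napi_skb_cache_get_bulk(heads, n);

        /* Frames past @got will not get an skb; give them back to the pool */
        for (i = got; i < n; i++)
                xdp_return_frame(frames[i]);

        return got;    /* build skbs only for the first @got frames */
}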
From patchwork Wed Jan 15 15:19:01 2025
X-Patchwork-Submitter: Alexander Lobakin
X-Patchwork-Id: 13940545
From: Alexander Lobakin
To: Andrew Lunn , "David S. Miller" , Eric Dumazet , Jakub Kicinski ,
 Paolo Abeni
Cc: Alexander Lobakin , Lorenzo Bianconi , Daniel Xu ,
 Alexei Starovoitov , Daniel Borkmann , Andrii Nakryiko ,
 John Fastabend , Toke Høiland-Jørgensen , Jesper Dangaard Brouer ,
 Martin KaFai Lau , netdev@vger.kernel.org, bpf@vger.kernel.org,
 linux-kernel@vger.kernel.org
Subject: [PATCH net-next v3 8/8] xdp: remove xdp_alloc_skb_bulk()
Date: Wed, 15 Jan 2025 16:19:01 +0100
Message-ID: <20250115151901.2063909-9-aleksander.lobakin@intel.com>
In-Reply-To: <20250115151901.2063909-1-aleksander.lobakin@intel.com>
References: <20250115151901.2063909-1-aleksander.lobakin@intel.com>

The only user was veth, which now uses napi_skb_cache_get_bulk().
That function is now preferred over a direct allocation and is exported
as well, so remove xdp_alloc_skb_bulk().

Signed-off-by: Alexander Lobakin
---
 include/net/xdp.h |  1 -
 net/core/xdp.c    | 10 ----------
 2 files changed, 11 deletions(-)

diff --git a/include/net/xdp.h b/include/net/xdp.h
index 6da0e746cf75..e2f83819405b 100644
--- a/include/net/xdp.h
+++ b/include/net/xdp.h
@@ -344,7 +344,6 @@ struct sk_buff *__xdp_build_skb_from_frame(struct xdp_frame *xdpf,
                                            struct net_device *dev);
 struct sk_buff *xdp_build_skb_from_frame(struct xdp_frame *xdpf,
                                          struct net_device *dev);
-int xdp_alloc_skb_bulk(void **skbs, int n_skb, gfp_t gfp);
 struct xdp_frame *xdpf_clone(struct xdp_frame *xdpf);
 
 static inline
diff --git a/net/core/xdp.c b/net/core/xdp.c
index 67b53fc7191e..eb8762ff16cb 100644
--- a/net/core/xdp.c
+++ b/net/core/xdp.c
@@ -619,16 +619,6 @@ void xdp_warn(const char *msg, const char *func, const int line)
 };
 EXPORT_SYMBOL_GPL(xdp_warn);
 
-int xdp_alloc_skb_bulk(void **skbs, int n_skb, gfp_t gfp)
-{
-        n_skb = kmem_cache_alloc_bulk(net_hotdata.skbuff_cache, gfp, n_skb, skbs);
-        if (unlikely(!n_skb))
-                return -ENOMEM;
-
-        return 0;
-}
-EXPORT_SYMBOL_GPL(xdp_alloc_skb_bulk);
-
 /**
  * xdp_build_skb_from_buff - create an skb from &xdp_buff
  * @xdp: &xdp_buff to convert to an skb