| Message ID | 20250107152940.26530-8-aleksander.lobakin@intel.com (mailing list archive) |
|---|---|
| State | New |
| Delegated to | Netdev Maintainers |
| Series | bpf: cpumap: enable GRO for XDP_PASS frames |
diff --git a/drivers/net/veth.c b/drivers/net/veth.c
index 01251868a9c2..7634ee8843bc 100644
--- a/drivers/net/veth.c
+++ b/drivers/net/veth.c
@@ -684,8 +684,7 @@ static void veth_xdp_rcv_bulk_skb(struct veth_rq *rq, void **frames,
 	void *skbs[VETH_XDP_BATCH];
 	int i;
 
-	if (xdp_alloc_skb_bulk(skbs, n_xdpf,
-			       GFP_ATOMIC | __GFP_ZERO) < 0) {
+	if (unlikely(!napi_skb_cache_get_bulk(skbs, n_xdpf))) {
 		for (i = 0; i < n_xdpf; i++)
 			xdp_return_frame(frames[i]);
 		stats->rx_drops += n_xdpf;
Now that we can bulk-allocate skbs from the NAPI cache, use that
function in veth as well instead of allocating directly from the kmem
caches. veth uses NAPI and GRO, so this is both context-safe and
beneficial.

Signed-off-by: Alexander Lobakin <aleksander.lobakin@intel.com>
---
 drivers/net/veth.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)
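For context, here is a minimal sketch of how the allocation path in veth_xdp_rcv_bulk_skb() looks with this patch applied. It is reconstructed around the hunk above rather than copied from the tree: the parameter list after `void **frames`, the trailing return and closing brace, and the elided skb-building loop are assumptions, and napi_skb_cache_get_bulk() is assumed (per the earlier patch in this series) to return the number of skbs it obtained, so a zero return means the whole batch failed.

```c
/*
 * Sketch only -- reconstructed around the hunk above, not a verbatim
 * copy of drivers/net/veth.c. The parameters after "frames" and the
 * elided skb-building loop are assumptions.
 */
static void veth_xdp_rcv_bulk_skb(struct veth_rq *rq, void **frames,
				  int n_xdpf, struct veth_xdp_tx_bq *bq,
				  struct veth_stats *stats)
{
	void *skbs[VETH_XDP_BATCH];
	int i;

	/*
	 * Grab n_xdpf skb heads from the per-CPU NAPI skb cache in one
	 * go instead of allocating them from the kmem caches with
	 * GFP_ATOMIC | __GFP_ZERO as before. Assumed: a zero return
	 * means no usable batch, matching the check in the diff.
	 */
	if (unlikely(!napi_skb_cache_get_bulk(skbs, n_xdpf))) {
		/* Allocation failed: return every XDP frame and count the drops. */
		for (i = 0; i < n_xdpf; i++)
			xdp_return_frame(frames[i]);
		stats->rx_drops += n_xdpf;
		return;
	}

	/*
	 * ... unchanged by this patch: build an sk_buff from each XDP
	 * frame using the heads in skbs[] and hand it to GRO ...
	 */
}
```

As the commit message notes, this function runs from veth's NAPI context and the resulting skbs go through GRO, which is why pulling the heads from the NAPI cache is both safe and worthwhile here.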