From patchwork Tue Jan 19 15:50:06 2021
X-Patchwork-Submitter: Björn Töpel
X-Patchwork-Id: 12030413
X-Patchwork-Delegate: bpf@iogearbox.net
From: Björn Töpel <bjorn.topel@gmail.com>
To: ast@kernel.org, daniel@iogearbox.net, netdev@vger.kernel.org,
    bpf@vger.kernel.org
Cc: magnus.karlsson@intel.com, maciej.fijalkowski@intel.com, kuba@kernel.org,
    jonathan.lemon@gmail.com, maximmi@nvidia.com, davem@davemloft.net,
    hawk@kernel.org, john.fastabend@gmail.com, ciara.loftus@intel.com,
    weqaar.a.janjua@intel.com
Subject: [PATCH bpf-next v2 1/8] xdp: restructure redirect actions
Date: Tue, 19 Jan 2021 16:50:06 +0100
Message-Id: <20210119155013.154808-2-bjorn.topel@gmail.com>
In-Reply-To: <20210119155013.154808-1-bjorn.topel@gmail.com>
References: <20210119155013.154808-1-bjorn.topel@gmail.com>
List-ID: bpf@vger.kernel.org

From: Björn Töpel

The XDP_REDIRECT implementations for maps and non-maps are fairly
similar, but obviously need to take different code paths depending on
whether the target uses a map or not. Today, the redirect targets for
XDP either use a map or are based on an ifindex. Future commits will
introduce yet another redirect target via a new helper,
bpf_redirect_xsk(). To pave the way for that, we introduce an explicit
redirect type in bpf_redirect_info. This makes the code easier to
follow, and makes it easier to add new redirect targets.

Further, using an explicit type in bpf_redirect_info has a slight
positive performance impact, by avoiding a pointer indirection for the
map-type lookup and instead using the hot cacheline of
bpf_redirect_info.

The bpf_redirect_info flags member is not used by XDP and is no longer
read or written. The map member is only written when required, rather
than unconditionally.
Reviewed-by: Maciej Fijalkowski
Signed-off-by: Björn Töpel
---
 include/linux/filter.h     |   9 ++
 include/trace/events/xdp.h |  46 +++++++----
 net/core/filter.c          | 164 ++++++++++++++++++-------------------
 3 files changed, 117 insertions(+), 102 deletions(-)

diff --git a/include/linux/filter.h b/include/linux/filter.h
index 7fdce5407214..5fc336a271c2 100644
--- a/include/linux/filter.h
+++ b/include/linux/filter.h
@@ -637,10 +637,19 @@ struct bpf_redirect_info {
 	u32 tgt_index;
 	void *tgt_value;
 	struct bpf_map *map;
+	u32 tgt_type;
 	u32 kern_flags;
 	struct bpf_nh_params nh;
 };

+enum xdp_redirect_type {
+	XDP_REDIR_UNSET,
+	XDP_REDIR_DEV_IFINDEX,
+	XDP_REDIR_DEV_MAP,
+	XDP_REDIR_CPU_MAP,
+	XDP_REDIR_XSK_MAP,
+};
+
 DECLARE_PER_CPU(struct bpf_redirect_info, bpf_redirect_info);

 /* flags for bpf_redirect_info kern_flags */
diff --git a/include/trace/events/xdp.h b/include/trace/events/xdp.h
index 76a97176ab81..0e17b9a74f28 100644
--- a/include/trace/events/xdp.h
+++ b/include/trace/events/xdp.h
@@ -96,9 +96,10 @@ DECLARE_EVENT_CLASS(xdp_redirect_template,
 	TP_PROTO(const struct net_device *dev,
 		 const struct bpf_prog *xdp,
 		 const void *tgt, int err,
-		 const struct bpf_map *map, u32 index),
+		 enum xdp_redirect_type type,
+		 const struct bpf_redirect_info *ri),

-	TP_ARGS(dev, xdp, tgt, err, map, index),
+	TP_ARGS(dev, xdp, tgt, err, type, ri),

 	TP_STRUCT__entry(
 		__field(int, prog_id)
@@ -111,12 +112,19 @@ DECLARE_EVENT_CLASS(xdp_redirect_template,
 	),

 	TP_fast_assign(
+		struct bpf_map *map = NULL;
+		u32 index = ri->tgt_index;
+
+		if (type == XDP_REDIR_DEV_MAP || type == XDP_REDIR_CPU_MAP ||
+		    type == XDP_REDIR_XSK_MAP)
+			map = READ_ONCE(ri->map);
+
 		__entry->prog_id	= xdp->aux->id;
 		__entry->act		= XDP_REDIRECT;
 		__entry->ifindex	= dev->ifindex;
 		__entry->err		= err;
 		__entry->to_ifindex	= map ? devmap_ifindex(tgt, map) :
-						index;
+						(u32)(long)tgt;
 		__entry->map_id		= map ? map->id : 0;
 		__entry->map_index	= map ? index : 0;
 	),
@@ -133,45 +141,49 @@ DEFINE_EVENT(xdp_redirect_template, xdp_redirect,
 	TP_PROTO(const struct net_device *dev,
 		 const struct bpf_prog *xdp,
 		 const void *tgt, int err,
-		 const struct bpf_map *map, u32 index),
-	TP_ARGS(dev, xdp, tgt, err, map, index)
+		 enum xdp_redirect_type type,
+		 const struct bpf_redirect_info *ri),
+	TP_ARGS(dev, xdp, tgt, err, type, ri)
 );

 DEFINE_EVENT(xdp_redirect_template, xdp_redirect_err,
 	TP_PROTO(const struct net_device *dev,
 		 const struct bpf_prog *xdp,
 		 const void *tgt, int err,
-		 const struct bpf_map *map, u32 index),
-	TP_ARGS(dev, xdp, tgt, err, map, index)
+		 enum xdp_redirect_type type,
+		 const struct bpf_redirect_info *ri),
+	TP_ARGS(dev, xdp, tgt, err, type, ri)
 );

 #define _trace_xdp_redirect(dev, xdp, to)				\
-	 trace_xdp_redirect(dev, xdp, NULL, 0, NULL, to)
+	 trace_xdp_redirect(dev, xdp, NULL, 0, XDP_REDIR_DEV_IFINDEX, NULL)

 #define _trace_xdp_redirect_err(dev, xdp, to, err)			\
-	 trace_xdp_redirect_err(dev, xdp, NULL, err, NULL, to)
+	 trace_xdp_redirect_err(dev, xdp, NULL, err, XDP_REDIR_DEV_IFINDEX, NULL)

-#define _trace_xdp_redirect_map(dev, xdp, to, map, index)		\
-	 trace_xdp_redirect(dev, xdp, to, 0, map, index)
+#define _trace_xdp_redirect_map(dev, xdp, to, type, ri)		\
+	 trace_xdp_redirect(dev, xdp, to, 0, type, ri)

-#define _trace_xdp_redirect_map_err(dev, xdp, to, map, index, err)	\
-	 trace_xdp_redirect_err(dev, xdp, to, err, map, index)
+#define _trace_xdp_redirect_map_err(dev, xdp, to, type, ri, err)	\
+	 trace_xdp_redirect_err(dev, xdp, to, err, type, ri)

 /* not used anymore, but kept around so as not to break old programs */
 DEFINE_EVENT(xdp_redirect_template, xdp_redirect_map,
 	TP_PROTO(const struct net_device *dev,
 		 const struct bpf_prog *xdp,
 		 const void *tgt, int err,
-		 const struct bpf_map *map, u32 index),
-	TP_ARGS(dev, xdp, tgt, err, map, index)
+		 enum xdp_redirect_type type,
+		 const struct bpf_redirect_info *ri),
+	TP_ARGS(dev, xdp, tgt, err, type, ri)
 );

 DEFINE_EVENT(xdp_redirect_template, xdp_redirect_map_err,
 	TP_PROTO(const struct net_device *dev,
 		 const struct bpf_prog *xdp,
 		 const void *tgt, int err,
-		 const struct bpf_map *map, u32 index),
-	TP_ARGS(dev, xdp, tgt, err, map, index)
+		 enum xdp_redirect_type type,
+		 const struct bpf_redirect_info *ri),
+	TP_ARGS(dev, xdp, tgt, err, type, ri)
 );

 TRACE_EVENT(xdp_cpumap_kthread,
diff --git a/net/core/filter.c b/net/core/filter.c
index 9ab94e90d660..5f31e21be531 100644
--- a/net/core/filter.c
+++ b/net/core/filter.c
@@ -3923,23 +3923,6 @@ static const struct bpf_func_proto bpf_xdp_adjust_meta_proto = {
 	.arg2_type	= ARG_ANYTHING,
 };

-static int __bpf_tx_xdp_map(struct net_device *dev_rx, void *fwd,
-			    struct bpf_map *map, struct xdp_buff *xdp)
-{
-	switch (map->map_type) {
-	case BPF_MAP_TYPE_DEVMAP:
-	case BPF_MAP_TYPE_DEVMAP_HASH:
-		return dev_map_enqueue(fwd, xdp, dev_rx);
-	case BPF_MAP_TYPE_CPUMAP:
-		return cpu_map_enqueue(fwd, xdp, dev_rx);
-	case BPF_MAP_TYPE_XSKMAP:
-		return __xsk_map_redirect(fwd, xdp);
-	default:
-		return -EBADRQC;
-	}
-	return 0;
-}
-
 void xdp_do_flush(void)
 {
 	__dev_flush();
@@ -3948,22 +3931,6 @@ void xdp_do_flush(void)
 }
 EXPORT_SYMBOL_GPL(xdp_do_flush);

-static inline void *__xdp_map_lookup_elem(struct bpf_map *map, u32 index)
-{
-	switch (map->map_type) {
-	case BPF_MAP_TYPE_DEVMAP:
-		return __dev_map_lookup_elem(map, index);
-	case BPF_MAP_TYPE_DEVMAP_HASH:
-		return __dev_map_hash_lookup_elem(map, index);
-	case BPF_MAP_TYPE_CPUMAP:
-		return __cpu_map_lookup_elem(map, index);
-	case BPF_MAP_TYPE_XSKMAP:
-		return __xsk_map_lookup_elem(map, index);
-	default:
-		return NULL;
-	}
-}
-
 void bpf_clear_redirect_map(struct bpf_map *map)
 {
 	struct bpf_redirect_info *ri;
@@ -3985,34 +3952,42 @@ int xdp_do_redirect(struct net_device *dev, struct xdp_buff *xdp,
		    struct bpf_prog *xdp_prog)
 {
 	struct bpf_redirect_info *ri = this_cpu_ptr(&bpf_redirect_info);
-	struct bpf_map *map = READ_ONCE(ri->map);
-	u32 index = ri->tgt_index;
+	enum xdp_redirect_type type = ri->tgt_type;
 	void *fwd = ri->tgt_value;
 	int err;

-	ri->tgt_index = 0;
+	ri->tgt_type = XDP_REDIR_UNSET;
 	ri->tgt_value = NULL;
-	WRITE_ONCE(ri->map, NULL);

-	if (unlikely(!map)) {
-		fwd = dev_get_by_index_rcu(dev_net(dev), index);
+	switch (type) {
+	case XDP_REDIR_DEV_IFINDEX:
+		fwd = dev_get_by_index_rcu(dev_net(dev), (u32)(long)fwd);
 		if (unlikely(!fwd)) {
 			err = -EINVAL;
-			goto err;
+			break;
 		}
 		err = dev_xdp_enqueue(fwd, xdp, dev);
-	} else {
-		err = __bpf_tx_xdp_map(dev, fwd, map, xdp);
+		break;
+	case XDP_REDIR_DEV_MAP:
+		err = dev_map_enqueue(fwd, xdp, dev);
+		break;
+	case XDP_REDIR_CPU_MAP:
+		err = cpu_map_enqueue(fwd, xdp, dev);
+		break;
+	case XDP_REDIR_XSK_MAP:
+		err = __xsk_map_redirect(fwd, xdp);
+		break;
+	default:
+		err = -EBADRQC;
 	}

 	if (unlikely(err))
 		goto err;

-	_trace_xdp_redirect_map(dev, xdp_prog, fwd, map, index);
+	_trace_xdp_redirect_map(dev, xdp_prog, fwd, type, ri);
 	return 0;
 err:
-	_trace_xdp_redirect_map_err(dev, xdp_prog, fwd, map, index, err);
+	_trace_xdp_redirect_map_err(dev, xdp_prog, fwd, type, ri, err);
 	return err;
 }
 EXPORT_SYMBOL_GPL(xdp_do_redirect);
@@ -4021,41 +3996,40 @@ static int xdp_do_generic_redirect_map(struct net_device *dev,
				       struct sk_buff *skb,
				       struct xdp_buff *xdp,
				       struct bpf_prog *xdp_prog,
-				       struct bpf_map *map)
+				       void *fwd,
+				       enum xdp_redirect_type type)
 {
 	struct bpf_redirect_info *ri = this_cpu_ptr(&bpf_redirect_info);
-	u32 index = ri->tgt_index;
-	void *fwd = ri->tgt_value;
-	int err = 0;
-
-	ri->tgt_index = 0;
-	ri->tgt_value = NULL;
-	WRITE_ONCE(ri->map, NULL);
+	int err;

-	if (map->map_type == BPF_MAP_TYPE_DEVMAP ||
-	    map->map_type == BPF_MAP_TYPE_DEVMAP_HASH) {
+	switch (type) {
+	case XDP_REDIR_DEV_MAP: {
 		struct bpf_dtab_netdev *dst = fwd;

 		err = dev_map_generic_redirect(dst, skb, xdp_prog);
 		if (unlikely(err))
 			goto err;
-	} else if (map->map_type == BPF_MAP_TYPE_XSKMAP) {
+		break;
+	}
+	case XDP_REDIR_XSK_MAP: {
 		struct xdp_sock *xs = fwd;

 		err = xsk_generic_rcv(xs, xdp);
 		if (err)
 			goto err;
 		consume_skb(skb);
-	} else {
+		break;
+	}
+	default:
 		/* TODO: Handle BPF_MAP_TYPE_CPUMAP */
 		err = -EBADRQC;
 		goto err;
 	}

-	_trace_xdp_redirect_map(dev, xdp_prog, fwd, map, index);
+	_trace_xdp_redirect_map(dev, xdp_prog, fwd, type, ri);
 	return 0;
 err:
-	_trace_xdp_redirect_map_err(dev, xdp_prog, fwd, map, index, err);
+	_trace_xdp_redirect_map_err(dev, xdp_prog, fwd, type, ri, err);
 	return err;
 }
@@ -4063,29 +4037,31 @@ int xdp_do_generic_redirect(struct net_device *dev, struct sk_buff *skb,
			    struct xdp_buff *xdp, struct bpf_prog *xdp_prog)
 {
 	struct bpf_redirect_info *ri = this_cpu_ptr(&bpf_redirect_info);
-	struct bpf_map *map = READ_ONCE(ri->map);
-	u32 index = ri->tgt_index;
-	struct net_device *fwd;
+	enum xdp_redirect_type type = ri->tgt_type;
+	void *fwd = ri->tgt_value;
 	int err = 0;

-	if (map)
-		return xdp_do_generic_redirect_map(dev, skb, xdp, xdp_prog,
-						   map);
-	ri->tgt_index = 0;
-	fwd = dev_get_by_index_rcu(dev_net(dev), index);
-	if (unlikely(!fwd)) {
-		err = -EINVAL;
-		goto err;
-	}
+	ri->tgt_type = XDP_REDIR_UNSET;
+	ri->tgt_value = NULL;

-	err = xdp_ok_fwd_dev(fwd, skb->len);
-	if (unlikely(err))
-		goto err;
+	if (type == XDP_REDIR_DEV_IFINDEX) {
+		fwd = dev_get_by_index_rcu(dev_net(dev), (u32)(long)fwd);
+		if (unlikely(!fwd)) {
+			err = -EINVAL;
+			goto err;
+		}

-	skb->dev = fwd;
-	_trace_xdp_redirect(dev, xdp_prog, index);
-	generic_xdp_tx(skb, xdp_prog);
-	return 0;
+		err = xdp_ok_fwd_dev(fwd, skb->len);
+		if (unlikely(err))
+			goto err;
+
+		skb->dev = fwd;
+		_trace_xdp_redirect(dev, xdp_prog, index);
+		generic_xdp_tx(skb, xdp_prog);
+		return 0;
+	}
+
+	return xdp_do_generic_redirect_map(dev, skb, xdp, xdp_prog, fwd, type);
 err:
 	_trace_xdp_redirect_err(dev, xdp_prog, index, err);
 	return err;
@@ -4098,10 +4074,9 @@ BPF_CALL_2(bpf_xdp_redirect, u32, ifindex, u64, flags)
 	if (unlikely(flags))
 		return XDP_ABORTED;

-	ri->flags = flags;
-	ri->tgt_index = ifindex;
-	ri->tgt_value = NULL;
-	WRITE_ONCE(ri->map, NULL);
+	ri->tgt_type = XDP_REDIR_DEV_IFINDEX;
+	ri->tgt_index = 0;
+	ri->tgt_value = (void *)(long)ifindex;

 	return XDP_REDIRECT;
 }
@@ -4123,18 +4098,37 @@ BPF_CALL_3(bpf_xdp_redirect_map, struct bpf_map *, map, u32, ifindex,
 	if (unlikely(flags > XDP_TX))
 		return XDP_ABORTED;

-	ri->tgt_value = __xdp_map_lookup_elem(map, ifindex);
+	switch (map->map_type) {
+	case BPF_MAP_TYPE_DEVMAP:
+		ri->tgt_value = __dev_map_lookup_elem(map, ifindex);
+		ri->tgt_type = XDP_REDIR_DEV_MAP;
+		break;
+	case BPF_MAP_TYPE_DEVMAP_HASH:
+		ri->tgt_value = __dev_map_hash_lookup_elem(map, ifindex);
+		ri->tgt_type = XDP_REDIR_DEV_MAP;
+		break;
+	case BPF_MAP_TYPE_CPUMAP:
+		ri->tgt_value = __cpu_map_lookup_elem(map, ifindex);
+		ri->tgt_type = XDP_REDIR_CPU_MAP;
+		break;
+	case BPF_MAP_TYPE_XSKMAP:
+		ri->tgt_value = __xsk_map_lookup_elem(map, ifindex);
+		ri->tgt_type = XDP_REDIR_XSK_MAP;
+		break;
+	default:
+		ri->tgt_value = NULL;
+	}
+
 	if (unlikely(!ri->tgt_value)) {
 		/* If the lookup fails we want to clear out the state in the
 		 * redirect_info struct completely, so that if an eBPF program
 		 * performs multiple lookups, the last one always takes
 		 * precedence.
 		 */
-		WRITE_ONCE(ri->map, NULL);
+		ri->tgt_type = XDP_REDIR_UNSET;
 		return flags;
 	}

-	ri->flags = flags;
 	ri->tgt_index = ifindex;
 	WRITE_ONCE(ri->map, map);

From patchwork Tue Jan 19 15:50:07 2021
X-Patchwork-Submitter: Björn Töpel
X-Patchwork-Id: 12030411
X-Patchwork-Delegate: bpf@iogearbox.net
From: Björn Töpel <bjorn.topel@gmail.com>
To: ast@kernel.org, daniel@iogearbox.net, netdev@vger.kernel.org,
    bpf@vger.kernel.org
Cc: magnus.karlsson@intel.com, maciej.fijalkowski@intel.com, kuba@kernel.org,
    jonathan.lemon@gmail.com, maximmi@nvidia.com, davem@davemloft.net,
    hawk@kernel.org, john.fastabend@gmail.com, ciara.loftus@intel.com,
    weqaar.a.janjua@intel.com
Subject: [PATCH bpf-next v2 2/8] xsk: remove explicit_free parameter from __xsk_rcv()
Date: Tue, 19 Jan 2021 16:50:07 +0100
Message-Id: <20210119155013.154808-3-bjorn.topel@gmail.com>
In-Reply-To: <20210119155013.154808-1-bjorn.topel@gmail.com>
References: <20210119155013.154808-1-bjorn.topel@gmail.com>
List-ID: bpf@vger.kernel.org

From: Björn Töpel

The explicit_free parameter of the __xsk_rcv() function was used to
mark whether the call was made via the generic or the native XDP path.
Instead of cluttering the code with if-statements and hard-to-read
"true/false" parameters, simply move the explicit free to
__xsk_map_redirect(), which is always called from the native XDP path.
Reviewed-by: Maciej Fijalkowski
Signed-off-by: Björn Töpel
---
 net/xdp/xsk.c | 47 +++++++++++++++++++++++++++++++----------------
 1 file changed, 31 insertions(+), 16 deletions(-)

diff --git a/net/xdp/xsk.c b/net/xdp/xsk.c
index 8037b04a9edd..5820de65060b 100644
--- a/net/xdp/xsk.c
+++ b/net/xdp/xsk.c
@@ -184,12 +184,13 @@ static void xsk_copy_xdp(struct xdp_buff *to, struct xdp_buff *from, u32 len)
 	memcpy(to_buf, from_buf, len + metalen);
 }

-static int __xsk_rcv(struct xdp_sock *xs, struct xdp_buff *xdp, u32 len,
-		     bool explicit_free)
+static int __xsk_rcv(struct xdp_sock *xs, struct xdp_buff *xdp)
 {
 	struct xdp_buff *xsk_xdp;
 	int err;
+	u32 len;

+	len = xdp->data_end - xdp->data;
 	if (len > xsk_pool_get_rx_frame_size(xs->pool)) {
 		xs->rx_dropped++;
 		return -ENOSPC;
@@ -207,8 +208,6 @@ static int __xsk_rcv(struct xdp_sock *xs, struct xdp_buff *xdp, u32 len,
 		xsk_buff_free(xsk_xdp);
 		return err;
 	}
-	if (explicit_free)
-		xdp_return_buff(xdp);
 	return 0;
 }

@@ -230,11 +229,8 @@ static bool xsk_is_bound(struct xdp_sock *xs)
 	return false;
 }

-static int xsk_rcv(struct xdp_sock *xs, struct xdp_buff *xdp,
-		   bool explicit_free)
+static int xsk_rcv_check(struct xdp_sock *xs, struct xdp_buff *xdp)
 {
-	u32 len;
-
 	if (!xsk_is_bound(xs))
 		return -EINVAL;

@@ -242,11 +238,7 @@ static int xsk_rcv(struct xdp_sock *xs, struct xdp_buff *xdp,
 		return -EINVAL;

 	sk_mark_napi_id_once_xdp(&xs->sk, xdp);
-	len = xdp->data_end - xdp->data;
-
-	return xdp->rxq->mem.type == MEM_TYPE_XSK_BUFF_POOL ?
-		__xsk_rcv_zc(xs, xdp, len) :
-		__xsk_rcv(xs, xdp, len, explicit_free);
+	return 0;
 }

 static void xsk_flush(struct xdp_sock *xs)
@@ -261,18 +253,41 @@ int xsk_generic_rcv(struct xdp_sock *xs, struct xdp_buff *xdp)
 	int err;

 	spin_lock_bh(&xs->rx_lock);
-	err = xsk_rcv(xs, xdp, false);
-	xsk_flush(xs);
+	err = xsk_rcv_check(xs, xdp);
+	if (!err) {
+		err = __xsk_rcv(xs, xdp);
+		xsk_flush(xs);
+	}
 	spin_unlock_bh(&xs->rx_lock);
 	return err;
 }

+static int xsk_rcv(struct xdp_sock *xs, struct xdp_buff *xdp)
+{
+	int err;
+	u32 len;
+
+	err = xsk_rcv_check(xs, xdp);
+	if (err)
+		return err;
+
+	if (xdp->rxq->mem.type == MEM_TYPE_XSK_BUFF_POOL) {
+		len = xdp->data_end - xdp->data;
+		return __xsk_rcv_zc(xs, xdp, len);
+	}
+
+	err = __xsk_rcv(xs, xdp);
+	if (!err)
+		xdp_return_buff(xdp);
+	return err;
+}
+
 int __xsk_map_redirect(struct xdp_sock *xs, struct xdp_buff *xdp)
 {
 	struct list_head *flush_list = this_cpu_ptr(&xskmap_flush_list);
 	int err;

-	err = xsk_rcv(xs, xdp, true);
+	err = xsk_rcv(xs, xdp);
 	if (err)
 		return err;

From patchwork Tue Jan 19 15:50:08 2021
X-Patchwork-Submitter: Björn Töpel
X-Patchwork-Id: 12030407
X-Patchwork-Delegate: bpf@iogearbox.net
From: Björn Töpel <bjorn.topel@gmail.com>
To: ast@kernel.org, daniel@iogearbox.net, netdev@vger.kernel.org,
    bpf@vger.kernel.org
Cc: magnus.karlsson@intel.com, maciej.fijalkowski@intel.com, kuba@kernel.org,
    jonathan.lemon@gmail.com, maximmi@nvidia.com, davem@davemloft.net,
    hawk@kernel.org, john.fastabend@gmail.com, ciara.loftus@intel.com,
    weqaar.a.janjua@intel.com
Subject: [PATCH bpf-next v2 3/8] xsk: fold xp_assign_dev and __xp_assign_dev
Date: Tue, 19 Jan 2021 16:50:08 +0100
Message-Id: <20210119155013.154808-4-bjorn.topel@gmail.com>
In-Reply-To: <20210119155013.154808-1-bjorn.topel@gmail.com>
References: <20210119155013.154808-1-bjorn.topel@gmail.com>
List-ID: bpf@vger.kernel.org

From: Björn Töpel

Fold xp_assign_dev() and __xp_assign_dev(). The former directly calls
the latter.
Reviewed-by: Maciej Fijalkowski
Signed-off-by: Björn Töpel
---
 net/xdp/xsk_buff_pool.c | 12 +++---------
 1 file changed, 3 insertions(+), 9 deletions(-)

diff --git a/net/xdp/xsk_buff_pool.c b/net/xdp/xsk_buff_pool.c
index 20598eea658c..8de01aaac4a0 100644
--- a/net/xdp/xsk_buff_pool.c
+++ b/net/xdp/xsk_buff_pool.c
@@ -119,8 +119,8 @@ static void xp_disable_drv_zc(struct xsk_buff_pool *pool)
 	}
 }

-static int __xp_assign_dev(struct xsk_buff_pool *pool,
-			   struct net_device *netdev, u16 queue_id, u16 flags)
+int xp_assign_dev(struct xsk_buff_pool *pool,
+		  struct net_device *netdev, u16 queue_id, u16 flags)
 {
 	bool force_zc, force_copy;
 	struct netdev_bpf bpf;
@@ -191,12 +191,6 @@ static int __xp_assign_dev(struct xsk_buff_pool *pool,
 	return err;
 }

-int xp_assign_dev(struct xsk_buff_pool *pool, struct net_device *dev,
-		  u16 queue_id, u16 flags)
-{
-	return __xp_assign_dev(pool, dev, queue_id, flags);
-}
-
 int xp_assign_dev_shared(struct xsk_buff_pool *pool, struct xdp_umem *umem,
			 struct net_device *dev, u16 queue_id)
 {
@@ -210,7 +204,7 @@ int xp_assign_dev_shared(struct xsk_buff_pool *pool, struct xdp_umem *umem,
 	if (pool->uses_need_wakeup)
 		flags |= XDP_USE_NEED_WAKEUP;

-	return __xp_assign_dev(pool, dev, queue_id, flags);
+	return xp_assign_dev(pool, dev, queue_id, flags);
 }

 void xp_clear_dev(struct xsk_buff_pool *pool)

From patchwork Tue Jan 19 15:50:09 2021
X-Patchwork-Submitter: Björn Töpel
X-Patchwork-Id: 12030409
X-Patchwork-Delegate: bpf@iogearbox.net
From: Björn Töpel <bjorn.topel@gmail.com>
To: ast@kernel.org, daniel@iogearbox.net, netdev@vger.kernel.org,
    bpf@vger.kernel.org
Cc: magnus.karlsson@intel.com, maciej.fijalkowski@intel.com, kuba@kernel.org,
    jonathan.lemon@gmail.com, maximmi@nvidia.com, davem@davemloft.net,
    hawk@kernel.org, john.fastabend@gmail.com, ciara.loftus@intel.com,
    weqaar.a.janjua@intel.com
Subject: [PATCH bpf-next v2 4/8] xsk: register XDP sockets at bind(), and add new AF_XDP BPF helper
Date: Tue, 19 Jan 2021 16:50:09 +0100
Message-Id: <20210119155013.154808-5-bjorn.topel@gmail.com>
In-Reply-To: <20210119155013.154808-1-bjorn.topel@gmail.com>
References: <20210119155013.154808-1-bjorn.topel@gmail.com>
List-ID: bpf@vger.kernel.org

From: Björn Töpel

Extend bind() for XDP sockets, so that the bound socket is added to the
netdev_rx_queue _rx array of the netdevice. We call this registering an
XDP socket. To redirect packets to a registered socket, a new BPF
helper is used: bpf_redirect_xsk().
For shared XDP sockets, only the first bound socket is registered. Users that require more advanced setups should continue to use the XSKMAP and bpf_redirect_map(). Now, why would one use bpf_redirect_xsk() over the regular bpf_redirect_map() helper? First: slightly better performance. Second: convenience. Most users use one socket per queue. This scenario is what registered sockets support. There is no need to create an XSKMAP. This can also reduce complexity in containerized setups, where users might want to use XDP sockets without the CAP_SYS_ADMIN capability. Reviewed-by: Maciej Fijalkowski Signed-off-by: Björn Töpel Reported-by: kernel test robot Reported-by: kernel test robot --- include/linux/filter.h | 1 + include/linux/netdevice.h | 1 + include/net/xdp_sock.h | 12 +++++ include/net/xsk_buff_pool.h | 2 +- include/uapi/linux/bpf.h | 7 +++ net/core/filter.c | 49 ++++++++++++++++-- net/xdp/xsk.c | 93 ++++++++++++++++++++++++++++------ net/xdp/xsk_buff_pool.c | 4 +- tools/include/uapi/linux/bpf.h | 7 +++ 9 files changed, 153 insertions(+), 23 deletions(-) diff --git a/include/linux/filter.h b/include/linux/filter.h index 5fc336a271c2..3f9efbd08cba 100644 --- a/include/linux/filter.h +++ b/include/linux/filter.h @@ -648,6 +648,7 @@ enum xdp_redirect_type { XDP_REDIR_DEV_MAP, XDP_REDIR_CPU_MAP, XDP_REDIR_XSK_MAP, + XDP_REDIR_XSK, }; DECLARE_PER_CPU(struct bpf_redirect_info, bpf_redirect_info); diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h index 5b949076ed23..cb0e215e981c 100644 --- a/include/linux/netdevice.h +++ b/include/linux/netdevice.h @@ -749,6 +749,7 @@ struct netdev_rx_queue { struct xdp_rxq_info xdp_rxq; #ifdef CONFIG_XDP_SOCKETS struct xsk_buff_pool *pool; + struct xdp_sock *xsk; #endif } ____cacheline_aligned_in_smp; diff --git a/include/net/xdp_sock.h b/include/net/xdp_sock.h index cc17bc957548..97b21c483baf 100644 --- a/include/net/xdp_sock.h +++ b/include/net/xdp_sock.h @@ -77,8 +77,10 @@ struct xdp_sock { #ifdef CONFIG_XDP_SOCKETS int
xsk_generic_rcv(struct xdp_sock *xs, struct xdp_buff *xdp); +int xsk_generic_redirect(struct net_device *dev, struct xdp_buff *xdp); int __xsk_map_redirect(struct xdp_sock *xs, struct xdp_buff *xdp); void __xsk_map_flush(void); +int xsk_redirect(struct xdp_sock *xs, struct xdp_buff *xdp); static inline struct xdp_sock *__xsk_map_lookup_elem(struct bpf_map *map, u32 key) @@ -100,6 +102,11 @@ static inline int xsk_generic_rcv(struct xdp_sock *xs, struct xdp_buff *xdp) return -ENOTSUPP; } +static inline int xsk_generic_redirect(struct net_device *dev, struct xdp_buff *xdp) +{ + return -EOPNOTSUPP; +} + static inline int __xsk_map_redirect(struct xdp_sock *xs, struct xdp_buff *xdp) { return -EOPNOTSUPP; @@ -115,6 +122,11 @@ static inline struct xdp_sock *__xsk_map_lookup_elem(struct bpf_map *map, return NULL; } +static inline int xsk_redirect(struct net_device *dev, struct xdp_buff *xdp) +{ + return -EOPNOTSUPP; +} + #endif /* CONFIG_XDP_SOCKETS */ #endif /* _LINUX_XDP_SOCK_H */ diff --git a/include/net/xsk_buff_pool.h b/include/net/xsk_buff_pool.h index eaa8386dbc63..bd531d561c60 100644 --- a/include/net/xsk_buff_pool.h +++ b/include/net/xsk_buff_pool.h @@ -84,7 +84,7 @@ struct xsk_buff_pool { /* AF_XDP core. */ struct xsk_buff_pool *xp_create_and_assign_umem(struct xdp_sock *xs, struct xdp_umem *umem); -int xp_assign_dev(struct xsk_buff_pool *pool, struct net_device *dev, +int xp_assign_dev(struct xdp_sock *xs, struct xsk_buff_pool *pool, struct net_device *dev, u16 queue_id, u16 flags); int xp_assign_dev_shared(struct xsk_buff_pool *pool, struct xdp_umem *umem, struct net_device *dev, u16 queue_id); diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h index c001766adcbc..bbc7d9a57262 100644 --- a/include/uapi/linux/bpf.h +++ b/include/uapi/linux/bpf.h @@ -3836,6 +3836,12 @@ union bpf_attr { * Return * A pointer to a struct socket on success or NULL if the file is * not a socket. 
+ * + * long bpf_redirect_xsk(struct xdp_buff *xdp_md, u64 action) + * Description + * Redirect to the registered AF_XDP socket. + * Return + * **XDP_REDIRECT** on success, otherwise the action parameter is returned. */ #define __BPF_FUNC_MAPPER(FN) \ FN(unspec), \ @@ -4001,6 +4007,7 @@ union bpf_attr { FN(ktime_get_coarse_ns), \ FN(ima_inode_hash), \ FN(sock_from_file), \ + FN(redirect_xsk), \ /* */ /* integer value in 'imm' field of BPF_CALL instruction selects which helper diff --git a/net/core/filter.c b/net/core/filter.c index 5f31e21be531..b457c83fba70 100644 --- a/net/core/filter.c +++ b/net/core/filter.c @@ -3977,6 +3977,9 @@ int xdp_do_redirect(struct net_device *dev, struct xdp_buff *xdp, case XDP_REDIR_XSK_MAP: err = __xsk_map_redirect(fwd, xdp); break; + case XDP_REDIR_XSK: + err = xsk_redirect(fwd, xdp); + break; default: err = -EBADRQC; } @@ -4044,25 +4047,33 @@ int xdp_do_generic_redirect(struct net_device *dev, struct sk_buff *skb, ri->tgt_type = XDP_REDIR_UNSET; ri->tgt_value = NULL; - if (type == XDP_REDIR_DEV_IFINDEX) { + switch (type) { + case XDP_REDIR_DEV_IFINDEX: { fwd = dev_get_by_index_rcu(dev_net(dev), (u32)(long)fwd); if (unlikely(!fwd)) { err = -EINVAL; - goto err; + break; } err = xdp_ok_fwd_dev(fwd, skb->len); if (unlikely(err)) - goto err; + break; skb->dev = fwd; _trace_xdp_redirect(dev, xdp_prog, index); generic_xdp_tx(skb, xdp_prog); return 0; } + case XDP_REDIR_XSK: + err = xsk_generic_redirect(dev, xdp); + if (err) + break; + consume_skb(skb); + break; + default: + return xdp_do_generic_redirect_map(dev, skb, xdp, xdp_prog, fwd, type); + } - return xdp_do_generic_redirect_map(dev, skb, xdp, xdp_prog, fwd, type); -err: _trace_xdp_redirect_err(dev, xdp_prog, index, err); return err; } @@ -4144,6 +4155,32 @@ static const struct bpf_func_proto bpf_xdp_redirect_map_proto = { .arg3_type = ARG_ANYTHING, }; +BPF_CALL_2(bpf_xdp_redirect_xsk, struct xdp_buff *, xdp, u64, action) +{ + struct net_device *dev = xdp->rxq->dev; + u32 queue_id = 
xdp->rxq->queue_index; + struct bpf_redirect_info *ri; + struct xdp_sock *xs; + + xs = READ_ONCE(dev->_rx[queue_id].xsk); + if (!xs) + return action; + + ri = this_cpu_ptr(&bpf_redirect_info); + ri->tgt_type = XDP_REDIR_XSK; + ri->tgt_value = xs; + + return XDP_REDIRECT; +} + +static const struct bpf_func_proto bpf_xdp_redirect_xsk_proto = { + .func = bpf_xdp_redirect_xsk, + .gpl_only = false, + .ret_type = RET_INTEGER, + .arg1_type = ARG_PTR_TO_CTX, + .arg2_type = ARG_ANYTHING, +}; + static unsigned long bpf_skb_copy(void *dst_buff, const void *skb, unsigned long off, unsigned long len) { @@ -7260,6 +7297,8 @@ xdp_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog) case BPF_FUNC_tcp_gen_syncookie: return &bpf_tcp_gen_syncookie_proto; #endif + case BPF_FUNC_redirect_xsk: + return &bpf_xdp_redirect_xsk_proto; default: return bpf_sk_base_func_proto(func_id); } diff --git a/net/xdp/xsk.c b/net/xdp/xsk.c index 5820de65060b..79f1492e71e2 100644 --- a/net/xdp/xsk.c +++ b/net/xdp/xsk.c @@ -134,6 +134,28 @@ int xsk_reg_pool_at_qid(struct net_device *dev, struct xsk_buff_pool *pool, return 0; } +static struct xdp_sock *xsk_get_at_qid(struct net_device *dev, u16 queue_id) +{ + return READ_ONCE(dev->_rx[queue_id].xsk); +} + +static void xsk_clear(struct xdp_sock *xs) +{ + struct net_device *dev = xs->dev; + u16 queue_id = xs->queue_id; + + if (queue_id < dev->num_rx_queues) + WRITE_ONCE(dev->_rx[queue_id].xsk, NULL); +} + +static void xsk_reg(struct xdp_sock *xs) +{ + struct net_device *dev = xs->dev; + u16 queue_id = xs->queue_id; + + WRITE_ONCE(dev->_rx[queue_id].xsk, xs); +} + void xp_release(struct xdp_buff_xsk *xskb) { xskb->pool->free_heads[xskb->pool->free_heads_cnt++] = xskb; @@ -184,7 +206,7 @@ static void xsk_copy_xdp(struct xdp_buff *to, struct xdp_buff *from, u32 len) memcpy(to_buf, from_buf, len + metalen); } -static int __xsk_rcv(struct xdp_sock *xs, struct xdp_buff *xdp) +static int ____xsk_rcv(struct xdp_sock *xs, struct xdp_buff *xdp) { struct 
xdp_buff *xsk_xdp; int err; @@ -211,6 +233,22 @@ static int __xsk_rcv(struct xdp_sock *xs, struct xdp_buff *xdp) return 0; } +static int __xsk_rcv(struct xdp_sock *xs, struct xdp_buff *xdp) +{ + int err; + u32 len; + + if (likely(xdp->rxq->mem.type == MEM_TYPE_XSK_BUFF_POOL)) { + len = xdp->data_end - xdp->data; + return __xsk_rcv_zc(xs, xdp, len); + } + + err = ____xsk_rcv(xs, xdp); + if (!err) + xdp_return_buff(xdp); + return err; +} + static bool xsk_tx_writeable(struct xdp_sock *xs) { if (xskq_cons_present_entries(xs->tx) > xs->tx->nentries / 2) @@ -248,6 +286,39 @@ static void xsk_flush(struct xdp_sock *xs) sock_def_readable(&xs->sk); } +int xsk_redirect(struct xdp_sock *xs, struct xdp_buff *xdp) +{ + struct list_head *flush_list = this_cpu_ptr(&xskmap_flush_list); + int err; + + sk_mark_napi_id_once_xdp(&xs->sk, xdp); + err = __xsk_rcv(xs, xdp); + if (err) + return err; + + if (!xs->flush_node.prev) + list_add(&xs->flush_node, flush_list); + return 0; +} + +int xsk_generic_redirect(struct net_device *dev, struct xdp_buff *xdp) +{ + struct xdp_sock *xs; + u32 queue_id; + int err; + + queue_id = xdp->rxq->queue_index; + xs = xsk_get_at_qid(dev, queue_id); + if (!xs) + return -EINVAL; + + spin_lock_bh(&xs->rx_lock); + err = ____xsk_rcv(xs, xdp); + xsk_flush(xs); + spin_unlock_bh(&xs->rx_lock); + return err; +} + int xsk_generic_rcv(struct xdp_sock *xs, struct xdp_buff *xdp) { int err; @@ -255,7 +326,7 @@ int xsk_generic_rcv(struct xdp_sock *xs, struct xdp_buff *xdp) spin_lock_bh(&xs->rx_lock); err = xsk_rcv_check(xs, xdp); if (!err) { - err = __xsk_rcv(xs, xdp); + err = ____xsk_rcv(xs, xdp); xsk_flush(xs); } spin_unlock_bh(&xs->rx_lock); @@ -264,22 +335,12 @@ int xsk_generic_rcv(struct xdp_sock *xs, struct xdp_buff *xdp) static int xsk_rcv(struct xdp_sock *xs, struct xdp_buff *xdp) { - int err; - u32 len; + int err = xsk_rcv_check(xs, xdp); - err = xsk_rcv_check(xs, xdp); if (err) return err; - if (xdp->rxq->mem.type == MEM_TYPE_XSK_BUFF_POOL) { - len = 
xdp->data_end - xdp->data; - return __xsk_rcv_zc(xs, xdp, len); - } - - err = __xsk_rcv(xs, xdp); - if (!err) - xdp_return_buff(xdp); - return err; + return __xsk_rcv(xs, xdp); } int __xsk_map_redirect(struct xdp_sock *xs, struct xdp_buff *xdp) @@ -661,6 +722,7 @@ static void xsk_unbind_dev(struct xdp_sock *xs) if (xs->state != XSK_BOUND) return; + xsk_clear(xs); WRITE_ONCE(xs->state, XSK_UNBOUND); /* Wait for driver to stop using the xdp socket. */ @@ -892,7 +954,7 @@ static int xsk_bind(struct socket *sock, struct sockaddr *addr, int addr_len) goto out_unlock; } - err = xp_assign_dev(xs->pool, dev, qid, flags); + err = xp_assign_dev(xs, xs->pool, dev, qid, flags); if (err) { xp_destroy(xs->pool); xs->pool = NULL; @@ -918,6 +980,7 @@ static int xsk_bind(struct socket *sock, struct sockaddr *addr, int addr_len) */ smp_wmb(); WRITE_ONCE(xs->state, XSK_BOUND); + xsk_reg(xs); } out_release: mutex_unlock(&xs->mutex); diff --git a/net/xdp/xsk_buff_pool.c b/net/xdp/xsk_buff_pool.c index 8de01aaac4a0..af02a69d0bf7 100644 --- a/net/xdp/xsk_buff_pool.c +++ b/net/xdp/xsk_buff_pool.c @@ -119,7 +119,7 @@ static void xp_disable_drv_zc(struct xsk_buff_pool *pool) } } -int xp_assign_dev(struct xsk_buff_pool *pool, +int xp_assign_dev(struct xdp_sock *xs, struct xsk_buff_pool *pool, struct net_device *netdev, u16 queue_id, u16 flags) { bool force_zc, force_copy; @@ -204,7 +204,7 @@ int xp_assign_dev_shared(struct xsk_buff_pool *pool, struct xdp_umem *umem, if (pool->uses_need_wakeup) flags |= XDP_USE_NEED_WAKEUP; - return xp_assign_dev(pool, dev, queue_id, flags); + return xp_assign_dev(NULL, pool, dev, queue_id, flags); } void xp_clear_dev(struct xsk_buff_pool *pool) diff --git a/tools/include/uapi/linux/bpf.h b/tools/include/uapi/linux/bpf.h index c001766adcbc..bbc7d9a57262 100644 --- a/tools/include/uapi/linux/bpf.h +++ b/tools/include/uapi/linux/bpf.h @@ -3836,6 +3836,12 @@ union bpf_attr { * Return * A pointer to a struct socket on success or NULL if the file is * not a 
socket. + * + * long bpf_redirect_xsk(struct xdp_buff *xdp_md, u64 action) + * Description + * Redirect to the registered AF_XDP socket. + * Return + * **XDP_REDIRECT** on success, otherwise the action parameter is returned. */ #define __BPF_FUNC_MAPPER(FN) \ FN(unspec), \ @@ -4001,6 +4007,7 @@ union bpf_attr { FN(ktime_get_coarse_ns), \ FN(ima_inode_hash), \ FN(sock_from_file), \ + FN(redirect_xsk), \ /* */ /* integer value in 'imm' field of BPF_CALL instruction selects which helper

From patchwork Tue Jan 19 15:50:10 2021
From: Björn Töpel
To: ast@kernel.org, daniel@iogearbox.net, netdev@vger.kernel.org, bpf@vger.kernel.org
Cc: magnus.karlsson@intel.com, maciej.fijalkowski@intel.com, kuba@kernel.org, jonathan.lemon@gmail.com, maximmi@nvidia.com, davem@davemloft.net, hawk@kernel.org, john.fastabend@gmail.com, ciara.loftus@intel.com, weqaar.a.janjua@intel.com, Marek Majtyka
Subject: [PATCH bpf-next v2 5/8] libbpf, xsk: select AF_XDP BPF program based on kernel version
Date: Tue, 19 Jan 2021 16:50:10 +0100
Message-Id: <20210119155013.154808-6-bjorn.topel@gmail.com>
In-Reply-To: <20210119155013.154808-1-bjorn.topel@gmail.com>

From: Björn Töpel

Add detection for kernel version, and adapt the BPF program based on kernel support. This way, users will get the best possible performance from the BPF program.
Reviewed-by: Maciej Fijalkowski Acked-by: Maciej Fijalkowski Signed-off-by: Björn Töpel Signed-off-by: Marek Majtyka --- tools/lib/bpf/libbpf.c | 2 +- tools/lib/bpf/libbpf_internal.h | 2 ++ tools/lib/bpf/libbpf_probes.c | 16 ------------- tools/lib/bpf/xsk.c | 41 ++++++++++++++++++++++++++++++--- 4 files changed, 41 insertions(+), 20 deletions(-) diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c index 2abbc3800568..6a53adf14a9c 100644 --- a/tools/lib/bpf/libbpf.c +++ b/tools/lib/bpf/libbpf.c @@ -693,7 +693,7 @@ bpf_object__add_programs(struct bpf_object *obj, Elf_Data *sec_data, return 0; } -static __u32 get_kernel_version(void) +__u32 get_kernel_version(void) { __u32 major, minor, patch; struct utsname info; diff --git a/tools/lib/bpf/libbpf_internal.h b/tools/lib/bpf/libbpf_internal.h index 969d0ac592ba..dafb780e2dd2 100644 --- a/tools/lib/bpf/libbpf_internal.h +++ b/tools/lib/bpf/libbpf_internal.h @@ -349,4 +349,6 @@ struct bpf_core_relo { enum bpf_core_relo_kind kind; }; +__u32 get_kernel_version(void); + #endif /* __LIBBPF_LIBBPF_INTERNAL_H */ diff --git a/tools/lib/bpf/libbpf_probes.c b/tools/lib/bpf/libbpf_probes.c index ecaae2927ab8..aae0231371d0 100644 --- a/tools/lib/bpf/libbpf_probes.c +++ b/tools/lib/bpf/libbpf_probes.c @@ -48,22 +48,6 @@ static int get_vendor_id(int ifindex) return strtol(buf, NULL, 0); } -static int get_kernel_version(void) -{ - int version, subversion, patchlevel; - struct utsname utsn; - - /* Return 0 on failure, and attempt to probe with empty kversion */ - if (uname(&utsn)) - return 0; - - if (sscanf(utsn.release, "%d.%d.%d", - &version, &subversion, &patchlevel) != 3) - return 0; - - return (version << 16) + (subversion << 8) + patchlevel; -} - static void probe_load(enum bpf_prog_type prog_type, const struct bpf_insn *insns, size_t insns_cnt, char *buf, size_t buf_len, __u32 ifindex) diff --git a/tools/lib/bpf/xsk.c b/tools/lib/bpf/xsk.c index e3e41ceeb1bc..c8642c6cb5d6 100644 --- a/tools/lib/bpf/xsk.c +++ 
b/tools/lib/bpf/xsk.c @@ -23,6 +23,7 @@ #include #include #include +#include #include #include #include @@ -46,6 +47,11 @@ #define PF_XDP AF_XDP #endif +enum xsk_prog { + XSK_PROG_FALLBACK, + XSK_PROG_REDIRECT_FLAGS, +}; + struct xsk_umem { struct xsk_ring_prod *fill_save; struct xsk_ring_cons *comp_save; @@ -351,6 +357,13 @@ int xsk_umem__create_v0_0_2(struct xsk_umem **umem_ptr, void *umem_area, COMPAT_VERSION(xsk_umem__create_v0_0_2, xsk_umem__create, LIBBPF_0.0.2) DEFAULT_VERSION(xsk_umem__create_v0_0_4, xsk_umem__create, LIBBPF_0.0.4) +static enum xsk_prog get_xsk_prog(void) +{ + __u32 kver = get_kernel_version(); + + return kver < KERNEL_VERSION(5, 3, 0) ? XSK_PROG_FALLBACK : XSK_PROG_REDIRECT_FLAGS; +} + static int xsk_load_xdp_prog(struct xsk_socket *xsk) { static const int log_buf_size = 16 * 1024; @@ -358,7 +371,7 @@ static int xsk_load_xdp_prog(struct xsk_socket *xsk) char log_buf[log_buf_size]; int err, prog_fd; - /* This is the C-program: + /* This is the fallback C-program: * SEC("xdp_sock") int xdp_sock_prog(struct xdp_md *ctx) * { * int ret, index = ctx->rx_queue_index; @@ -414,9 +427,31 @@ static int xsk_load_xdp_prog(struct xsk_socket *xsk) /* The jumps are to this instruction */ BPF_EXIT_INSN(), }; - size_t insns_cnt = sizeof(prog) / sizeof(struct bpf_insn); - prog_fd = bpf_load_program(BPF_PROG_TYPE_XDP, prog, insns_cnt, + /* This is the post-5.3 kernel C-program: + * SEC("xdp_sock") int xdp_sock_prog(struct xdp_md *ctx) + * { + * return bpf_redirect_map(&xsks_map, ctx->rx_queue_index, XDP_PASS); + * } + */ + struct bpf_insn prog_redirect_flags[] = { + /* r2 = *(u32 *)(r1 + 16) */ + BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, 16), + /* r1 = xskmap[] */ + BPF_LD_MAP_FD(BPF_REG_1, ctx->xsks_map_fd), + /* r3 = XDP_PASS */ + BPF_MOV64_IMM(BPF_REG_3, 2), + /* call bpf_redirect_map */ + BPF_EMIT_CALL(BPF_FUNC_redirect_map), + BPF_EXIT_INSN(), + }; + size_t insns_cnt[] = {sizeof(prog) / sizeof(struct bpf_insn), + sizeof(prog_redirect_flags) / 
sizeof(struct bpf_insn), + }; + struct bpf_insn *progs[] = {prog, prog_redirect_flags}; + enum xsk_prog option = get_xsk_prog(); + + prog_fd = bpf_load_program(BPF_PROG_TYPE_XDP, progs[option], insns_cnt[option], "LGPL-2.1 or BSD-2-Clause", 0, log_buf, log_buf_size); if (prog_fd < 0) {

From patchwork Tue Jan 19 15:50:11 2021
From: Björn Töpel
To: ast@kernel.org, daniel@iogearbox.net, netdev@vger.kernel.org, bpf@vger.kernel.org
Cc: magnus.karlsson@intel.com, maciej.fijalkowski@intel.com, kuba@kernel.org, jonathan.lemon@gmail.com, maximmi@nvidia.com, davem@davemloft.net, hawk@kernel.org, john.fastabend@gmail.com, ciara.loftus@intel.com, weqaar.a.janjua@intel.com
Subject: [PATCH bpf-next v2 6/8] libbpf, xsk: select bpf_redirect_xsk(), if supported
Date: Tue, 19 Jan 2021 16:50:11 +0100
Message-Id: <20210119155013.154808-7-bjorn.topel@gmail.com>
In-Reply-To: <20210119155013.154808-1-bjorn.topel@gmail.com>

From: Björn Töpel

Select bpf_redirect_xsk() as the default AF_XDP BPF program, if supported. The bpf_redirect_xsk() helper does not require an XSKMAP, so make sure that no map is created/updated when using it.

Reviewed-by: Maciej Fijalkowski Signed-off-by: Björn Töpel --- tools/lib/bpf/xsk.c | 46 +++++++++++++++++++++++++++++++++++++++++---- 1 file changed, 42 insertions(+), 4 deletions(-) diff --git a/tools/lib/bpf/xsk.c b/tools/lib/bpf/xsk.c index c8642c6cb5d6..27e36d6d92a6 100644 --- a/tools/lib/bpf/xsk.c +++ b/tools/lib/bpf/xsk.c @@ -47,9 +47,12 @@ #define PF_XDP AF_XDP #endif +#define XSKMAP_NOT_NEEDED -1 + enum xsk_prog { XSK_PROG_FALLBACK, XSK_PROG_REDIRECT_FLAGS, + XSK_PROG_REDIRECT_XSK, }; struct xsk_umem { @@ -361,7 +364,11 @@ static enum xsk_prog get_xsk_prog(void) { __u32 kver = get_kernel_version(); - return kver < KERNEL_VERSION(5, 3, 0) ?
XSK_PROG_FALLBACK : XSK_PROG_REDIRECT_FLAGS; + if (kver < KERNEL_VERSION(5, 3, 0)) + return XSK_PROG_FALLBACK; + if (kver < KERNEL_VERSION(5, 12, 0)) + return XSK_PROG_REDIRECT_FLAGS; + return XSK_PROG_REDIRECT_XSK; } static int xsk_load_xdp_prog(struct xsk_socket *xsk) @@ -445,10 +452,25 @@ static int xsk_load_xdp_prog(struct xsk_socket *xsk) BPF_EMIT_CALL(BPF_FUNC_redirect_map), BPF_EXIT_INSN(), }; + + /* This is the post-5.12 kernel C-program: + * SEC("xdp_sock") int xdp_sock_prog(struct xdp_md *ctx) + * { + * return bpf_redirect_xsk(ctx, XDP_PASS); + * } + */ + struct bpf_insn prog_redirect_xsk[] = { + /* r2 = XDP_PASS */ + BPF_MOV64_IMM(BPF_REG_2, 2), + /* call bpf_redirect_xsk */ + BPF_EMIT_CALL(BPF_FUNC_redirect_xsk), + BPF_EXIT_INSN(), + }; size_t insns_cnt[] = {sizeof(prog) / sizeof(struct bpf_insn), sizeof(prog_redirect_flags) / sizeof(struct bpf_insn), + sizeof(prog_redirect_xsk) / sizeof(struct bpf_insn), }; - struct bpf_insn *progs[] = {prog, prog_redirect_flags}; + struct bpf_insn *progs[] = {prog, prog_redirect_flags, prog_redirect_xsk}; enum xsk_prog option = get_xsk_prog(); prog_fd = bpf_load_program(BPF_PROG_TYPE_XDP, progs[option], insns_cnt[option], @@ -508,12 +530,22 @@ static int xsk_get_max_queues(struct xsk_socket *xsk) return ret; } +static bool xskmap_required(void) +{ + return get_xsk_prog() != XSK_PROG_REDIRECT_XSK; +} + static int xsk_create_bpf_maps(struct xsk_socket *xsk) { struct xsk_ctx *ctx = xsk->ctx; int max_queues; int fd; + if (!xskmap_required()) { + ctx->xsks_map_fd = XSKMAP_NOT_NEEDED; + return 0; + } + max_queues = xsk_get_max_queues(xsk); if (max_queues < 0) return max_queues; @@ -532,6 +564,9 @@ static void xsk_delete_bpf_maps(struct xsk_socket *xsk) { struct xsk_ctx *ctx = xsk->ctx; + if (ctx->xsks_map_fd == XSKMAP_NOT_NEEDED) + return; + bpf_map_delete_elem(ctx->xsks_map_fd, &ctx->queue_id); close(ctx->xsks_map_fd); } @@ -563,7 +598,7 @@ static int xsk_lookup_bpf_maps(struct xsk_socket *xsk) if (err) goto out_map_ids; - 
ctx->xsks_map_fd = -1; + ctx->xsks_map_fd = XSKMAP_NOT_NEEDED; for (i = 0; i < prog_info.nr_map_ids; i++) { fd = bpf_map_get_fd_by_id(map_ids[i]); @@ -585,7 +620,7 @@ static int xsk_lookup_bpf_maps(struct xsk_socket *xsk) } err = 0; - if (ctx->xsks_map_fd == -1) + if (ctx->xsks_map_fd == XSKMAP_NOT_NEEDED && xskmap_required()) err = -ENOENT; out_map_ids: @@ -597,6 +632,9 @@ static int xsk_set_bpf_maps(struct xsk_socket *xsk) { struct xsk_ctx *ctx = xsk->ctx; + if (ctx->xsks_map_fd == XSKMAP_NOT_NEEDED) + return 0; + return bpf_map_update_elem(ctx->xsks_map_fd, &ctx->queue_id, &xsk->fd, 0); }

From patchwork Tue Jan 19 15:50:12 2021
From: Björn Töpel
To: ast@kernel.org, daniel@iogearbox.net, netdev@vger.kernel.org, bpf@vger.kernel.org
Cc: magnus.karlsson@intel.com, maciej.fijalkowski@intel.com, kuba@kernel.org, jonathan.lemon@gmail.com, maximmi@nvidia.com, davem@davemloft.net, hawk@kernel.org, john.fastabend@gmail.com, ciara.loftus@intel.com, weqaar.a.janjua@intel.com
Subject: [PATCH bpf-next v2 7/8] selftest/bpf: add XDP socket tests for bpf_redirect_{xsk, map}()
Date: Tue, 19 Jan 2021 16:50:12 +0100
Message-Id: <20210119155013.154808-8-bjorn.topel@gmail.com>
In-Reply-To: <20210119155013.154808-1-bjorn.topel@gmail.com>

From: Björn Töpel

Add support for externally loaded XDP programs to xdpxceiver/test_xsk.sh, so that bpf_redirect_xsk() and bpf_redirect_map() can be exercised.
Signed-off-by: Björn Töpel
---
 .../selftests/bpf/progs/xdpxceiver_ext1.c     | 15 ++++
 .../selftests/bpf/progs/xdpxceiver_ext2.c     |  9 +++
 tools/testing/selftests/bpf/test_xsk.sh       | 48 ++++++++++++
 tools/testing/selftests/bpf/xdpxceiver.c      | 77 ++++++++++++++++++-
 tools/testing/selftests/bpf/xdpxceiver.h      |  2 +
 5 files changed, 147 insertions(+), 4 deletions(-)
 create mode 100644 tools/testing/selftests/bpf/progs/xdpxceiver_ext1.c
 create mode 100644 tools/testing/selftests/bpf/progs/xdpxceiver_ext2.c

diff --git a/tools/testing/selftests/bpf/progs/xdpxceiver_ext1.c b/tools/testing/selftests/bpf/progs/xdpxceiver_ext1.c
new file mode 100644
index 000000000000..18894040cca6
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/xdpxceiver_ext1.c
@@ -0,0 +1,15 @@
+// SPDX-License-Identifier: GPL-2.0
+#include <linux/bpf.h>
+#include <bpf/bpf_helpers.h>
+
+struct {
+	__uint(type, BPF_MAP_TYPE_XSKMAP);
+	__uint(max_entries, 32);
+	__uint(key_size, sizeof(int));
+	__uint(value_size, sizeof(int));
+} xsks_map SEC(".maps");
+
+SEC("xdp_sock") int xdp_sock_prog(struct xdp_md *ctx)
+{
+	return bpf_redirect_map(&xsks_map, ctx->rx_queue_index, XDP_DROP);
+}
diff --git a/tools/testing/selftests/bpf/progs/xdpxceiver_ext2.c b/tools/testing/selftests/bpf/progs/xdpxceiver_ext2.c
new file mode 100644
index 000000000000..bd239b958c01
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/xdpxceiver_ext2.c
@@ -0,0 +1,9 @@
+// SPDX-License-Identifier: GPL-2.0
+#include <linux/bpf.h>
+#include <bpf/bpf_helpers.h>
+
+SEC("xdp_sock") int xdp_sock_prog(struct xdp_md *ctx)
+{
+	return bpf_redirect_xsk(ctx, XDP_DROP);
+}
+
diff --git a/tools/testing/selftests/bpf/test_xsk.sh b/tools/testing/selftests/bpf/test_xsk.sh
index 88a7483eaae4..3a3996edf527 100755
--- a/tools/testing/selftests/bpf/test_xsk.sh
+++ b/tools/testing/selftests/bpf/test_xsk.sh
@@ -245,6 +245,54 @@ retval=$?
 test_status $retval "${TEST_NAME}"
 statusList+=($retval)
 
+### TEST 10
+TEST_NAME="SKB EXT BPF_REDIRECT_MAP"
+
+vethXDPgeneric ${VETH0} ${VETH1} ${NS1}
+
+params=("-S" "--ext-prog1")
+execxdpxceiver params
+
+retval=$?
+test_status $retval "${TEST_NAME}"
+statusList+=($retval)
+
+### TEST 11
+TEST_NAME="DRV EXT BPF_REDIRECT_MAP"
+
+vethXDPnative ${VETH0} ${VETH1} ${NS1}
+
+params=("-N" "--ext-prog1")
+execxdpxceiver params
+
+retval=$?
+test_status $retval "${TEST_NAME}"
+statusList+=($retval)
+
+### TEST 12
+TEST_NAME="SKB EXT BPF_REDIRECT_XSK"
+
+vethXDPgeneric ${VETH0} ${VETH1} ${NS1}
+
+params=("-S" "--ext-prog2")
+execxdpxceiver params
+
+retval=$?
+test_status $retval "${TEST_NAME}"
+statusList+=($retval)
+
+### TEST 13
+TEST_NAME="DRV EXT BPF_REDIRECT_XSK"
+
+vethXDPnative ${VETH0} ${VETH1} ${NS1}
+
+params=("-N" "--ext-prog2")
+execxdpxceiver params
+
+retval=$?
+test_status $retval "${TEST_NAME}"
+statusList+=($retval)
+
 ## END TESTS
 
 cleanup_exit ${VETH0} ${VETH1} ${NS1}
diff --git a/tools/testing/selftests/bpf/xdpxceiver.c b/tools/testing/selftests/bpf/xdpxceiver.c
index 1e722ee76b1f..fd0852fdd97d 100644
--- a/tools/testing/selftests/bpf/xdpxceiver.c
+++ b/tools/testing/selftests/bpf/xdpxceiver.c
@@ -45,7 +45,7 @@
  * - Only copy mode is supported because veth does not currently support
  * zero-copy mode
  *
- * Total tests: 8
+ * Total tests: 13
  *
  * Flow:
  * -----
@@ -93,6 +93,7 @@ typedef __u16 __sum16;
 #include
 #include
 #include
+#include
 #include "xdpxceiver.h"
 #include "../kselftest.h"
@@ -296,6 +297,23 @@ static void xsk_populate_fill_ring(struct xsk_umem_info *umem)
 	xsk_ring_prod__submit(&umem->fq, XSK_RING_PROD__DEFAULT_NUM_DESCS);
 }
 
+static int update_xskmap(struct bpf_object *obj, struct xsk_socket_info *xsk)
+{
+	int xskmap, fd, key = opt_queue;
+	struct bpf_map *map;
+
+	map = bpf_object__find_map_by_name(obj, "xsks_map");
+	xskmap = bpf_map__fd(map);
+	if (xskmap < 0)
+		return 0;
+
+	fd = xsk_socket__fd(xsk->xsk);
+	if (bpf_map_update_elem(xskmap, &key, &fd, 0))
+		return -1;
+
+	return 0;
+}
+
 static int xsk_configure_socket(struct ifobject *ifobject)
 {
 	struct xsk_socket_config cfg;
@@ -310,7 +328,7 @@ static int xsk_configure_socket(struct ifobject *ifobject)
 	ifobject->xsk->umem = ifobject->umem;
 	cfg.rx_size = XSK_RING_CONS__DEFAULT_NUM_DESCS;
 	cfg.tx_size = XSK_RING_PROD__DEFAULT_NUM_DESCS;
-	cfg.libbpf_flags = 0;
+	cfg.libbpf_flags = ifobject->obj ? XSK_LIBBPF_FLAGS__INHIBIT_PROG_LOAD : 0;
 	cfg.xdp_flags = opt_xdp_flags;
 	cfg.bind_flags = opt_xdp_bind_flags;
@@ -328,6 +346,11 @@ static int xsk_configure_socket(struct ifobject *ifobject)
 	if (ret)
 		return 1;
 
+	if (ifobject->obj) {
+		if (update_xskmap(ifobject->obj, ifobject->xsk))
+			exit_with_error(errno);
+	}
+
 	return 0;
 }
@@ -342,6 +365,8 @@ static struct option long_options[] = {
 	{"bidi", optional_argument, 0, 'B'},
 	{"debug", optional_argument, 0, 'D'},
 	{"tx-pkt-count", optional_argument, 0, 'C'},
+	{"ext-prog1", no_argument, 0, 1},
+	{"ext-prog2", no_argument, 0, 1},
 	{0, 0, 0, 0}
 };
@@ -441,9 +466,30 @@ static int validate_interfaces(void)
 	return ret;
 }
 
+static int load_xdp_program(char *argv0, struct bpf_object **obj, int ext_prog)
+{
+	struct bpf_prog_load_attr prog_load_attr = {
+		.prog_type = BPF_PROG_TYPE_XDP,
+	};
+	char xdp_filename[256];
+	int prog_fd;
+
+	snprintf(xdp_filename, sizeof(xdp_filename), "%s_ext%d.o", argv0, ext_prog);
+	prog_load_attr.file = xdp_filename;
+
+	if (bpf_prog_load_xattr(&prog_load_attr, obj, &prog_fd))
+		return -1;
+	return prog_fd;
+}
+
+static int attach_xdp_program(int ifindex, int prog_fd)
+{
+	return bpf_set_link_xdp_fd(ifindex, prog_fd, opt_xdp_flags);
+}
+
 static void parse_command_line(int argc, char **argv)
 {
-	int option_index, interface_index = 0, c;
+	int option_index = 0, interface_index = 0, ext_prog = 0, c;
 
 	opterr = 0;
@@ -454,6 +500,9 @@ static void parse_command_line(int argc, char **argv)
 			break;
 
 		switch (c) {
+		case 1:
+			ext_prog = atoi(long_options[option_index].name + strlen("ext-prog"));
+			break;
 		case 'i':
 			if (interface_index == MAX_INTERFACES)
 				break;
@@ -509,6 +558,22 @@ static void parse_command_line(int argc, char **argv)
 		usage(basename(argv[0]));
 		ksft_exit_xfail();
 	}
+
+	if (ext_prog) {
+		struct bpf_object *obj;
+		int prog_fd;
+
+		for (int i = 0; i < MAX_INTERFACES; i++) {
+			prog_fd = load_xdp_program(argv[0], &obj, ext_prog);
+			if (prog_fd < 0) {
+				ksft_test_result_fail("ERROR: could not load ext XDP program\n");
+				ksft_exit_xfail();
+			}
+
+			ifdict[i]->prog_fd = prog_fd;
+			ifdict[i]->obj = obj;
+		}
+	}
 }
 
 static void kick_tx(struct xsk_socket_info *xsk)
@@ -818,6 +883,7 @@ static void *worker_testapp_validate(void *arg)
 	struct generic_data *data = (struct generic_data *)malloc(sizeof(struct generic_data));
 	struct iphdr *ip_hdr = (struct iphdr *)(pkt_data + sizeof(struct ethhdr));
 	struct ethhdr *eth_hdr = (struct ethhdr *)pkt_data;
+	struct ifobject *ifobject = (struct ifobject *)arg;
 	void *bufs = NULL;
 
 	pthread_attr_setstacksize(&attr, THREAD_STACK);
@@ -830,6 +896,9 @@ static void *worker_testapp_validate(void *arg)
 
 		if (strcmp(((struct ifobject *)arg)->nsname, ""))
 			switch_namespace(((struct ifobject *)arg)->ifdict_index);
+
+		if (ifobject->obj && attach_xdp_program(ifobject->ifindex, ifobject->prog_fd) < 0)
+			exit_with_error(errno);
 	}
 
 	if (((struct ifobject *)arg)->fv.vector == tx) {
@@ -1035,7 +1104,7 @@ int main(int argc, char **argv)
 	ifaceconfig->src_port = UDP_SRC_PORT;
 
 	for (int i = 0; i < MAX_INTERFACES; i++) {
-		ifdict[i] = (struct ifobject *)malloc(sizeof(struct ifobject));
+		ifdict[i] = (struct ifobject *)calloc(1, sizeof(struct ifobject));
 		if (!ifdict[i])
 			exit_with_error(errno);
diff --git a/tools/testing/selftests/bpf/xdpxceiver.h b/tools/testing/selftests/bpf/xdpxceiver.h
index 61f595b6f200..3c15c2e95026 100644
--- a/tools/testing/selftests/bpf/xdpxceiver.h
+++ b/tools/testing/selftests/bpf/xdpxceiver.h
@@ -124,6 +124,8 @@ struct ifobject {
 	u32 src_ip;
 	u16 src_port;
 	u16 dst_port;
+	int prog_fd;
+	struct bpf_object *obj;
 };
 
 static struct ifobject *ifdict[MAX_INTERFACES];

From patchwork Tue Jan 19 15:50:13 2021
X-Patchwork-Submitter: Björn Töpel
X-Patchwork-Id: 12030229
X-Patchwork-Delegate: bpf@iogearbox.net
From: Björn Töpel
To: ast@kernel.org, daniel@iogearbox.net, netdev@vger.kernel.org, bpf@vger.kernel.org
Cc: Björn Töpel, magnus.karlsson@intel.com, maciej.fijalkowski@intel.com,
    kuba@kernel.org, jonathan.lemon@gmail.com, maximmi@nvidia.com,
    davem@davemloft.net, hawk@kernel.org, john.fastabend@gmail.com,
    ciara.loftus@intel.com, weqaar.a.janjua@intel.com
Subject: [PATCH bpf-next v2 8/8] selftest/bpf: remove a lot of ifobject casting in xdpxceiver
Date: Tue, 19 Jan 2021 16:50:13 +0100
Message-Id: <20210119155013.154808-9-bjorn.topel@gmail.com>
In-Reply-To: <20210119155013.154808-1-bjorn.topel@gmail.com>
References: <20210119155013.154808-1-bjorn.topel@gmail.com>

From: Björn Töpel

Instead of passing void * all over the place, let us pass the actual type
(ifobject) and remove the void-ptr-to-type-ptr casting.
Signed-off-by: Björn Töpel
---
 tools/testing/selftests/bpf/xdpxceiver.c | 87 ++++++++++++------------
 1 file changed, 42 insertions(+), 45 deletions(-)

diff --git a/tools/testing/selftests/bpf/xdpxceiver.c b/tools/testing/selftests/bpf/xdpxceiver.c
index fd0852fdd97d..7734fc87124f 100644
--- a/tools/testing/selftests/bpf/xdpxceiver.c
+++ b/tools/testing/selftests/bpf/xdpxceiver.c
@@ -225,14 +225,14 @@ static inline u16 udp_csum(u32 saddr, u32 daddr, u32 len, u8 proto, u16 *udp_pkt
 	return csum_tcpudp_magic(saddr, daddr, len, proto, csum);
 }
 
-static void gen_eth_hdr(void *data, struct ethhdr *eth_hdr)
+static void gen_eth_hdr(struct ifobject *ifobject, struct ethhdr *eth_hdr)
 {
-	memcpy(eth_hdr->h_dest, ((struct ifobject *)data)->dst_mac, ETH_ALEN);
-	memcpy(eth_hdr->h_source, ((struct ifobject *)data)->src_mac, ETH_ALEN);
+	memcpy(eth_hdr->h_dest, ifobject->dst_mac, ETH_ALEN);
+	memcpy(eth_hdr->h_source, ifobject->src_mac, ETH_ALEN);
 	eth_hdr->h_proto = htons(ETH_P_IP);
 }
 
-static void gen_ip_hdr(void *data, struct iphdr *ip_hdr)
+static void gen_ip_hdr(struct ifobject *ifobject, struct iphdr *ip_hdr)
 {
 	ip_hdr->version = IP_PKT_VER;
 	ip_hdr->ihl = 0x5;
@@ -242,15 +242,15 @@ static void gen_ip_hdr(void *data, struct iphdr *ip_hdr)
 	ip_hdr->frag_off = 0;
 	ip_hdr->ttl = IPDEFTTL;
 	ip_hdr->protocol = IPPROTO_UDP;
-	ip_hdr->saddr = ((struct ifobject *)data)->src_ip;
-	ip_hdr->daddr = ((struct ifobject *)data)->dst_ip;
+	ip_hdr->saddr = ifobject->src_ip;
+	ip_hdr->daddr = ifobject->dst_ip;
 	ip_hdr->check = 0;
 }
 
-static void gen_udp_hdr(void *data, void *arg, struct udphdr *udp_hdr)
+static void gen_udp_hdr(void *data, struct ifobject *ifobject, struct udphdr *udp_hdr)
 {
-	udp_hdr->source = htons(((struct ifobject *)arg)->src_port);
-	udp_hdr->dest = htons(((struct ifobject *)arg)->dst_port);
+	udp_hdr->source = htons(ifobject->src_port);
+	udp_hdr->dest = htons(ifobject->dst_port);
 	udp_hdr->len = htons(UDP_PKT_SIZE);
 	memset32_htonl(pkt_data + PKT_HDR_SIZE,
 		       htonl(((struct generic_data *)data)->seqnum), UDP_PKT_DATA_SIZE);
@@ -693,28 +693,27 @@ static inline int get_batch_size(int pkt_cnt)
 	return opt_pkt_count - pkt_cnt;
 }
 
-static void complete_tx_only_all(void *arg)
+static void complete_tx_only_all(struct ifobject *ifobject)
 {
 	bool pending;
 
 	do {
 		pending = false;
-		if (((struct ifobject *)arg)->xsk->outstanding_tx) {
-			complete_tx_only(((struct ifobject *)
-					  arg)->xsk, BATCH_SIZE);
-			pending = !!((struct ifobject *)arg)->xsk->outstanding_tx;
+		if (ifobject->xsk->outstanding_tx) {
+			complete_tx_only(ifobject->xsk, BATCH_SIZE);
+			pending = !!ifobject->xsk->outstanding_tx;
 		}
 	} while (pending);
 }
 
-static void tx_only_all(void *arg)
+static void tx_only_all(struct ifobject *ifobject)
 {
 	struct pollfd fds[MAX_SOCKS] = { };
 	u32 frame_nb = 0;
 	int pkt_cnt = 0;
 	int ret;
 
-	fds[0].fd = xsk_socket__fd(((struct ifobject *)arg)->xsk->xsk);
+	fds[0].fd = xsk_socket__fd(ifobject->xsk->xsk);
 	fds[0].events = POLLOUT;
 
 	while ((opt_pkt_count && pkt_cnt < opt_pkt_count) || !opt_pkt_count) {
@@ -729,12 +728,12 @@ static void tx_only_all(void *arg)
 			continue;
 		}
 
-		tx_only(((struct ifobject *)arg)->xsk, &frame_nb, batch_size);
+		tx_only(ifobject->xsk, &frame_nb, batch_size);
 		pkt_cnt += batch_size;
 	}
 
 	if (opt_pkt_count)
-		complete_tx_only_all(arg);
+		complete_tx_only_all(ifobject);
 }
 
 static void worker_pkt_dump(void)
@@ -845,14 +844,14 @@ static void worker_pkt_validate(void)
 	}
 }
 
-static void thread_common_ops(void *arg, void *bufs, pthread_mutex_t *mutexptr,
+static void thread_common_ops(struct ifobject *ifobject, void *bufs, pthread_mutex_t *mutexptr,
 			      atomic_int *spinningptr)
 {
 	int ctr = 0;
 	int ret;
 
-	xsk_configure_umem((struct ifobject *)arg, bufs, num_frames * XSK_UMEM__DEFAULT_FRAME_SIZE);
-	ret = xsk_configure_socket((struct ifobject *)arg);
+	xsk_configure_umem(ifobject, bufs, num_frames * XSK_UMEM__DEFAULT_FRAME_SIZE);
+	ret = xsk_configure_socket(ifobject);
 
 	/* Retry Create Socket if it fails as xsk_socket__create()
 	 * is asynchronous
@@ -863,9 +862,8 @@ static void thread_common_ops(void *arg, void *bufs, pthread_mutex_t *mutexptr,
 	pthread_mutex_lock(mutexptr);
 	while (ret && ctr < SOCK_RECONF_CTR) {
 		atomic_store(spinningptr, 1);
-		xsk_configure_umem((struct ifobject *)arg,
-				   bufs, num_frames * XSK_UMEM__DEFAULT_FRAME_SIZE);
-		ret = xsk_configure_socket((struct ifobject *)arg);
+		xsk_configure_umem(ifobject, bufs, num_frames * XSK_UMEM__DEFAULT_FRAME_SIZE);
+		ret = xsk_configure_socket(ifobject);
 		usleep(USLEEP_MAX);
 		ctr++;
 	}
@@ -894,52 +892,51 @@ static void *worker_testapp_validate(void *arg)
 		if (bufs == MAP_FAILED)
 			exit_with_error(errno);
 
-		if (strcmp(((struct ifobject *)arg)->nsname, ""))
-			switch_namespace(((struct ifobject *)arg)->ifdict_index);
+		if (strcmp(ifobject->nsname, ""))
+			switch_namespace(ifobject->ifdict_index);
 
 		if (ifobject->obj && attach_xdp_program(ifobject->ifindex, ifobject->prog_fd) < 0)
 			exit_with_error(errno);
 	}
 
-	if (((struct ifobject *)arg)->fv.vector == tx) {
+	if (ifobject->fv.vector == tx) {
 		int spinningrxctr = 0;
 
 		if (!bidi_pass)
-			thread_common_ops(arg, bufs, &sync_mutex_tx, &spinning_tx);
+			thread_common_ops(ifobject, bufs, &sync_mutex_tx, &spinning_tx);
 
 		while (atomic_load(&spinning_rx) && spinningrxctr < SOCK_RECONF_CTR) {
 			spinningrxctr++;
 			usleep(USLEEP_MAX);
 		}
 
-		ksft_print_msg("Interface [%s] vector [Tx]\n", ((struct ifobject *)arg)->ifname);
+		ksft_print_msg("Interface [%s] vector [Tx]\n", ifobject->ifname);
 		for (int i = 0; i < num_frames; i++) {
 			/*send EOT frame */
 			if (i == (num_frames - 1))
 				data->seqnum = -1;
 			else
 				data->seqnum = i;
-			gen_udp_hdr((void *)data, (void *)arg, udp_hdr);
-			gen_ip_hdr((void *)arg, ip_hdr);
+			gen_udp_hdr((void *)data, ifobject, udp_hdr);
+			gen_ip_hdr(ifobject, ip_hdr);
 			gen_udp_csum(udp_hdr, ip_hdr);
-			gen_eth_hdr((void *)arg, eth_hdr);
-			gen_eth_frame(((struct ifobject *)arg)->umem,
-				      i * XSK_UMEM__DEFAULT_FRAME_SIZE);
+			gen_eth_hdr(ifobject, eth_hdr);
+			gen_eth_frame(ifobject->umem, i * XSK_UMEM__DEFAULT_FRAME_SIZE);
 		}
 
 		free(data);
 		ksft_print_msg("Sending %d packets on interface %s\n",
-			       (opt_pkt_count - 1), ((struct ifobject *)arg)->ifname);
-		tx_only_all(arg);
-	} else if (((struct ifobject *)arg)->fv.vector == rx) {
+			       (opt_pkt_count - 1), ifobject->ifname);
+		tx_only_all(ifobject);
+	} else if (ifobject->fv.vector == rx) {
 		struct pollfd fds[MAX_SOCKS] = { };
 		int ret;
 
 		if (!bidi_pass)
-			thread_common_ops(arg, bufs, &sync_mutex_tx, &spinning_rx);
+			thread_common_ops(ifobject, bufs, &sync_mutex_tx, &spinning_rx);
 
-		ksft_print_msg("Interface [%s] vector [Rx]\n", ((struct ifobject *)arg)->ifname);
-		xsk_populate_fill_ring(((struct ifobject *)arg)->umem);
+		ksft_print_msg("Interface [%s] vector [Rx]\n", ifobject->ifname);
+		xsk_populate_fill_ring(ifobject->umem);
 
 		TAILQ_INIT(&head);
 
 		if (debug_pkt_dump) {
@@ -948,7 +945,7 @@ static void *worker_testapp_validate(void *arg)
 				exit_with_error(errno);
 		}
 
-		fds[0].fd = xsk_socket__fd(((struct ifobject *)arg)->xsk->xsk);
+		fds[0].fd = xsk_socket__fd(ifobject->xsk->xsk);
 		fds[0].events = POLLIN;
 
 		pthread_mutex_lock(&sync_mutex);
@@ -961,7 +958,7 @@ static void *worker_testapp_validate(void *arg)
 				if (ret <= 0)
 					continue;
 			}
-			rx_pkt(((struct ifobject *)arg)->xsk, fds);
+			rx_pkt(ifobject->xsk, fds);
 			worker_pkt_validate();
 
 			if (sigvar)
@@ -969,15 +966,15 @@ static void *worker_testapp_validate(void *arg)
 		}
 
 		ksft_print_msg("Received %d packets on interface %s\n",
-			       pkt_counter, ((struct ifobject *)arg)->ifname);
+			       pkt_counter, ifobject->ifname);
 
 		if (opt_teardown)
 			ksft_print_msg("Destroying socket\n");
 	}
 
 	if (!opt_bidi || (opt_bidi && bidi_pass)) {
-		xsk_socket__delete(((struct ifobject *)arg)->xsk->xsk);
-		(void)xsk_umem__delete(((struct ifobject *)arg)->umem->umem);
+		xsk_socket__delete(ifobject->xsk->xsk);
+		(void)xsk_umem__delete(ifobject->umem->umem);
 	}
 
 	pthread_exit(NULL);
 }