From: Lorenzo Bianconi
Subject: [PATCH v9 bpf-next 01/14] net: skbuff: add data_len field to skb_shared_info
Date: Mon, 14 Jun 2021 14:49:39 +0200

The data_len field will be used to store the paged frame length for xdp_buff/xdp_frame. This is a preliminary patch to properly support xdp multi-buff.
Signed-off-by: Lorenzo Bianconi
---
 include/linux/skbuff.h | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h
index dbf820a50a39..332ec56c200d 100644
--- a/include/linux/skbuff.h
+++ b/include/linux/skbuff.h
@@ -522,7 +522,10 @@ struct skb_shared_info {
 	struct sk_buff	*frag_list;
 	struct skb_shared_hwtstamps hwtstamps;
 	unsigned int	gso_type;
-	u32		tskey;
+	union {
+		u32	tskey;
+		u32	data_len;
+	};
 
 	/*
 	 * Warning : all fields before dataref are cleared in __alloc_skb()
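For illustration, a minimal sketch (not part of the patch) of what the union buys us: the total frame length becomes the linear length plus the new shared_info data_len. This mirrors what the bpf_xdp_get_buff_len helper introduced later in the series computes; note that xdp_buff_is_mb() only arrives in the next patch.

/* Hypothetical helper, not part of this series: total length of an
 * xdp_buff, using skb_shared_info::data_len for the paged area.
 * xdp_buff_is_mb() is introduced in the next patch.
 */
static inline u32 xdp_total_len(struct xdp_buff *xdp)
{
	u32 len = xdp->data_end - xdp->data;	/* linear part */

	if (xdp_buff_is_mb(xdp))		/* paged part is valid */
		len += xdp_get_shared_info_from_buff(xdp)->data_len;
	return len;
}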
From: Lorenzo Bianconi
Subject: [PATCH v9 bpf-next 02/14] xdp: introduce flags field in xdp_buff/xdp_frame
Date: Mon, 14 Jun 2021 14:49:40 +0200

Introduce a flags field in the xdp_buff/xdp_frame data structures to define additional buffer features. At the moment the only supported buffer feature is the multi-buffer bit (mb), which specifies whether this is a linear buffer (mb = 0) or a multi-buffer frame (mb = 1). In the latter case the shared_info area at the end of the first buffer is properly initialized to link together subsequent buffers.

Signed-off-by: Lorenzo Bianconi
---
 include/net/xdp.h | 31 ++++++++++++++++++++++++++++++-
 1 file changed, 30 insertions(+), 1 deletion(-)

diff --git a/include/net/xdp.h b/include/net/xdp.h
index 5533f0ab2afc..de11d981fa44 100644
--- a/include/net/xdp.h
+++ b/include/net/xdp.h
@@ -66,6 +66,10 @@ struct xdp_txq_info {
 	struct net_device *dev;
 };
 
+enum xdp_buff_flags {
+	XDP_FLAGS_MULTI_BUFF = BIT(0), /* non-linear xdp buff */
+};
+
 struct xdp_buff {
 	void *data;
 	void *data_end;
@@ -74,13 +78,30 @@ struct xdp_buff {
 	struct xdp_rxq_info *rxq;
 	struct xdp_txq_info *txq;
 	u32 frame_sz; /* frame size to deduce data_hard_end/reserved tailroom*/
+	u16 flags; /* supported values defined in xdp_flags */
 };
 
+static __always_inline bool xdp_buff_is_mb(struct xdp_buff *xdp)
+{
+	return !!(xdp->flags & XDP_FLAGS_MULTI_BUFF);
+}
+
+static __always_inline void xdp_buff_set_mb(struct xdp_buff *xdp)
+{
+	xdp->flags |= XDP_FLAGS_MULTI_BUFF;
+}
+
+static __always_inline void xdp_buff_clear_mb(struct xdp_buff *xdp)
+{
+	xdp->flags &= ~XDP_FLAGS_MULTI_BUFF;
+}
+
 static __always_inline void
 xdp_init_buff(struct xdp_buff *xdp, u32 frame_sz, struct xdp_rxq_info *rxq)
 {
 	xdp->frame_sz = frame_sz;
 	xdp->rxq = rxq;
+	xdp->flags = 0;
 }
 
 static __always_inline void
@@ -117,13 +138,19 @@ struct xdp_frame {
 	u16 headroom;
 	u32 metasize:8;
 	u32 frame_sz:24;
+	struct net_device *dev_rx; /* used by cpumap */
 
 	/* Lifetime of xdp_rxq_info is limited to NAPI/enqueue time,
 	 * while mem info is valid on remote CPU.
 	 */
 	struct xdp_mem_info mem;
-	struct net_device *dev_rx; /* used by cpumap */
+	u16 flags; /* supported values defined in xdp_flags */
 };
 
+static __always_inline bool xdp_frame_is_mb(struct xdp_frame *frame)
+{
+	return !!(frame->flags & XDP_FLAGS_MULTI_BUFF);
+}
+
 #define XDP_BULK_QUEUE_SIZE	16
 struct xdp_frame_bulk {
 	int count;
@@ -180,6 +207,7 @@ void xdp_convert_frame_to_buff(struct xdp_frame *frame, struct xdp_buff *xdp)
 	xdp->data_end = frame->data + frame->len;
 	xdp->data_meta = frame->data - frame->metasize;
 	xdp->frame_sz = frame->frame_sz;
+	xdp->flags = frame->flags;
 }
 
 static inline
@@ -206,6 +234,7 @@ int xdp_update_frame_from_buff(struct xdp_buff *xdp,
 	xdp_frame->headroom = headroom - sizeof(*xdp_frame);
 	xdp_frame->metasize = metasize;
 	xdp_frame->frame_sz = xdp->frame_sz;
+	xdp_frame->flags = xdp->flags;
 
 	return 0;
 }
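The intended driver-side convention is that the mb bit starts cleared (xdp_init_buff() zeroes flags) and is only set once a fragment is actually linked into the shared_info area. A minimal sketch of that pattern, hypothetical driver code that the next patch applies for real in mvneta:

/* Hypothetical driver helper (a sketch, not from this series): mark an
 * already-prepared xdp_buff as multi-buffer once the first fragment is
 * about to be linked into its shared_info area.
 */
static void rx_mark_first_frag(struct xdp_buff *xdp)
{
	if (!xdp_buff_is_mb(xdp)) {
		struct skb_shared_info *sinfo;

		sinfo = xdp_get_shared_info_from_buff(xdp);
		sinfo->nr_frags = 0;	/* initialized lazily, mb only */
		xdp_buff_set_mb(xdp);	/* shared_info is now valid */
	}
}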
From: Lorenzo Bianconi
Subject: [PATCH v9 bpf-next 03/14] net: mvneta: update mb bit before passing the xdp buffer to eBPF layer
Date: Mon, 14 Jun 2021 14:49:41 +0200

Update the multi-buffer bit (mb) in xdp_buff to notify the XDP/eBPF layer and XDP remote drivers if this is a "non-linear" XDP buffer. Access the xdp_shared_info area only if the xdp_buff mb bit is set.

Signed-off-by: Lorenzo Bianconi
---
 drivers/net/ethernet/marvell/mvneta.c | 21 ++++++++++++++++-----
 1 file changed, 16 insertions(+), 5 deletions(-)

diff --git a/drivers/net/ethernet/marvell/mvneta.c b/drivers/net/ethernet/marvell/mvneta.c
index 7d5cd9bc6c99..f8993f7488b9 100644
--- a/drivers/net/ethernet/marvell/mvneta.c
+++ b/drivers/net/ethernet/marvell/mvneta.c
@@ -2041,9 +2041,14 @@ mvneta_xdp_put_buff(struct mvneta_port *pp, struct mvneta_rx_queue *rxq,
 {
 	int i;
 
+	if (likely(!xdp_buff_is_mb(xdp)))
+		goto out;
+
 	for (i = 0; i < sinfo->nr_frags; i++)
 		page_pool_put_full_page(rxq->page_pool,
 					skb_frag_page(&sinfo->frags[i]), true);
+
+out:
 	page_pool_put_page(rxq->page_pool, virt_to_head_page(xdp->data),
 			   sync_len, true);
 }
@@ -2245,7 +2250,6 @@ mvneta_swbm_rx_frame(struct mvneta_port *pp,
 	int data_len = -MVNETA_MH_SIZE, len;
 	struct net_device *dev = pp->dev;
 	enum dma_data_direction dma_dir;
-	struct skb_shared_info *sinfo;
 
 	if (*size > MVNETA_MAX_RX_BUF_SIZE) {
 		len = MVNETA_MAX_RX_BUF_SIZE;
@@ -2267,9 +2271,6 @@ mvneta_swbm_rx_frame(struct mvneta_port *pp,
 	prefetch(data);
 	xdp_prepare_buff(xdp, data, pp->rx_offset_correction + MVNETA_MH_SIZE,
 			 data_len, false);
-
-	sinfo = xdp_get_shared_info_from_buff(xdp);
-	sinfo->nr_frags = 0;
 }
 
 static void
@@ -2304,12 +2305,18 @@ mvneta_swbm_add_rx_fragment(struct mvneta_port *pp,
 		skb_frag_size_set(frag, data_len);
 		__skb_frag_set_page(frag, page);
 
+		if (!xdp_buff_is_mb(xdp)) {
+			xdp_sinfo->data_len = *size;
+			xdp_buff_set_mb(xdp);
+		}
 		/* last fragment */
 		if (len == *size) {
 			struct skb_shared_info *sinfo;
 
 			sinfo = xdp_get_shared_info_from_buff(xdp);
 			sinfo->nr_frags = xdp_sinfo->nr_frags;
+			sinfo->data_len = xdp_sinfo->data_len;
+
 			memcpy(sinfo->frags, xdp_sinfo->frags,
 			       sinfo->nr_frags * sizeof(skb_frag_t));
 		}
@@ -2324,9 +2331,12 @@ mvneta_swbm_build_skb(struct mvneta_port *pp, struct mvneta_rx_queue *rxq,
 		      struct xdp_buff *xdp, u32 desc_status)
 {
 	struct skb_shared_info *sinfo = xdp_get_shared_info_from_buff(xdp);
-	int i, num_frags = sinfo->nr_frags;
+	int i, num_frags = 0;
 	struct sk_buff *skb;
 
+	if (unlikely(xdp_buff_is_mb(xdp)))
+		num_frags = sinfo->nr_frags;
+
 	skb = build_skb(xdp->data_hard_start, PAGE_SIZE);
 	if (!skb)
 		return ERR_PTR(-ENOMEM);
@@ -2398,6 +2408,7 @@ static int mvneta_rx_swbm(struct napi_struct *napi,
 			frame_sz = size - ETH_FCS_LEN;
 			desc_status = rx_status;
 
+			xdp_buff_clear_mb(&xdp_buf);
 			mvneta_swbm_rx_frame(pp, rx_desc, rxq, &xdp_buf,
 					     &size, page);
 		} else {
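The rule this patch establishes is that the shared_info area is garbage unless the mb bit is set, since the driver no longer zeroes nr_frags for linear frames. A consumer-side sketch of that rule (illustrative helper, not from the series):

/* Consumers must gate every shared_info access on the mb bit (sketch
 * only): for linear frames nr_frags is uninitialized memory.
 */
static u32 count_frags(struct xdp_buff *xdp)
{
	struct skb_shared_info *sinfo;

	if (!xdp_buff_is_mb(xdp))
		return 0;	/* shared_info area is not initialized */

	sinfo = xdp_get_shared_info_from_buff(xdp);
	return sinfo->nr_frags;
}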
From: Lorenzo Bianconi
Subject: [PATCH v9 bpf-next 04/14] xdp: add multi-buff support to xdp_return_{buff/frame}
Date: Mon, 14 Jun 2021 14:49:42 +0200

Take into account whether the received xdp_buff/xdp_frame is non-linear when recycling/returning the frame memory to the allocator or into xdp_frame_bulk. Introduce xdp_return_num_frags_from_buff to return a given number of fragments of an xdp multi-buff starting from the tail.
Signed-off-by: Lorenzo Bianconi
---
 include/net/xdp.h | 18 ++++++++++++++--
 net/core/xdp.c    | 54 ++++++++++++++++++++++++++++++++++++++++++++++-
 2 files changed, 69 insertions(+), 3 deletions(-)

diff --git a/include/net/xdp.h b/include/net/xdp.h
index de11d981fa44..935a6f83115f 100644
--- a/include/net/xdp.h
+++ b/include/net/xdp.h
@@ -275,10 +275,24 @@ void __xdp_release_frame(void *data, struct xdp_mem_info *mem);
 static inline void xdp_release_frame(struct xdp_frame *xdpf)
 {
 	struct xdp_mem_info *mem = &xdpf->mem;
+	struct skb_shared_info *sinfo;
+	int i;
 
 	/* Curr only page_pool needs this */
-	if (mem->type == MEM_TYPE_PAGE_POOL)
-		__xdp_release_frame(xdpf->data, mem);
+	if (mem->type != MEM_TYPE_PAGE_POOL)
+		return;
+
+	if (likely(!xdp_frame_is_mb(xdpf)))
+		goto out;
+
+	sinfo = xdp_get_shared_info_from_frame(xdpf);
+	for (i = 0; i < sinfo->nr_frags; i++) {
+		struct page *page = skb_frag_page(&sinfo->frags[i]);
+
+		__xdp_release_frame(page_address(page), mem);
+	}
+out:
+	__xdp_release_frame(xdpf->data, mem);
 }
 
 int xdp_rxq_info_reg(struct xdp_rxq_info *xdp_rxq,
diff --git a/net/core/xdp.c b/net/core/xdp.c
index 725d20f1b100..f61c63115c95 100644
--- a/net/core/xdp.c
+++ b/net/core/xdp.c
@@ -375,12 +375,38 @@ static void __xdp_return(void *data, struct xdp_mem_info *mem, bool napi_direct,
 
 void xdp_return_frame(struct xdp_frame *xdpf)
 {
+	struct skb_shared_info *sinfo;
+	int i;
+
+	if (likely(!xdp_frame_is_mb(xdpf)))
+		goto out;
+
+	sinfo = xdp_get_shared_info_from_frame(xdpf);
+	for (i = 0; i < sinfo->nr_frags; i++) {
+		struct page *page = skb_frag_page(&sinfo->frags[i]);
+
+		__xdp_return(page_address(page), &xdpf->mem, false, NULL);
+	}
+out:
 	__xdp_return(xdpf->data, &xdpf->mem, false, NULL);
 }
 EXPORT_SYMBOL_GPL(xdp_return_frame);
 
 void xdp_return_frame_rx_napi(struct xdp_frame *xdpf)
 {
+	struct skb_shared_info *sinfo;
+	int i;
+
+	if (likely(!xdp_frame_is_mb(xdpf)))
+		goto out;
+
+	sinfo = xdp_get_shared_info_from_frame(xdpf);
+	for (i = 0; i < sinfo->nr_frags; i++) {
+		struct page *page = skb_frag_page(&sinfo->frags[i]);
+
+		__xdp_return(page_address(page), &xdpf->mem, true, NULL);
+	}
+out:
 	__xdp_return(xdpf->data, &xdpf->mem, true, NULL);
 }
 EXPORT_SYMBOL_GPL(xdp_return_frame_rx_napi);
 
@@ -416,7 +442,7 @@ void xdp_return_frame_bulk(struct xdp_frame *xdpf,
 	struct xdp_mem_allocator *xa;
 
 	if (mem->type != MEM_TYPE_PAGE_POOL) {
-		__xdp_return(xdpf->data, &xdpf->mem, false, NULL);
+		xdp_return_frame(xdpf);
 		return;
 	}
 
@@ -435,12 +461,38 @@ void xdp_return_frame_bulk(struct xdp_frame *xdpf,
 		bq->xa = rhashtable_lookup(mem_id_ht, &mem->id, mem_id_rht_params);
 	}
 
+	if (unlikely(xdp_frame_is_mb(xdpf))) {
+		struct skb_shared_info *sinfo;
+		int i;
+
+		sinfo = xdp_get_shared_info_from_frame(xdpf);
+		for (i = 0; i < sinfo->nr_frags; i++) {
+			skb_frag_t *frag = &sinfo->frags[i];
+
+			bq->q[bq->count++] = skb_frag_address(frag);
+			if (bq->count == XDP_BULK_QUEUE_SIZE)
+				xdp_flush_frame_bulk(bq);
+		}
+	}
 	bq->q[bq->count++] = xdpf->data;
 }
 EXPORT_SYMBOL_GPL(xdp_return_frame_bulk);
 
 void xdp_return_buff(struct xdp_buff *xdp)
 {
+	struct skb_shared_info *sinfo;
+	int i;
+
+	if (likely(!xdp_buff_is_mb(xdp)))
+		goto out;
+
+	sinfo = xdp_get_shared_info_from_buff(xdp);
+	for (i = 0; i < sinfo->nr_frags; i++) {
+		struct page *page = skb_frag_page(&sinfo->frags[i]);
+
+		__xdp_return(page_address(page), &xdp->rxq->mem, true, xdp);
+	}
+out:
 	__xdp_return(xdp->data, &xdp->rxq->mem, true, xdp);
 }
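With this in place a driver's XDP_DROP path needs no frag-awareness of its own; a minimal sketch of the call site (hypothetical driver code, error handling elided):

/* Sketch of XDP_DROP handling in a driver: xdp_return_buff() now walks
 * sinfo->frags internally when the mb bit is set, so a single call
 * frees every fragment plus the head page.
 */
static void handle_xdp_verdict(struct xdp_buff *xdp, u32 act)
{
	switch (act) {
	case XDP_DROP:
		xdp_return_buff(xdp);	/* frees frags + linear part */
		break;
	default:
		break;
	}
}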
From: Lorenzo Bianconi
Subject: [PATCH v9 bpf-next 05/14] net: mvneta: add multi buffer support to XDP_TX
Date: Mon, 14 Jun 2021 14:49:43 +0200

Introduce the capability to map a non-linear xdp buffer in mvneta_xdp_submit_frame() for XDP_TX and XDP_REDIRECT.

Signed-off-by: Lorenzo Bianconi
---
 drivers/net/ethernet/marvell/mvneta.c | 112 +++++++++++++++++---------
 1 file changed, 76 insertions(+), 36 deletions(-)

diff --git a/drivers/net/ethernet/marvell/mvneta.c b/drivers/net/ethernet/marvell/mvneta.c
index f8993f7488b9..ab57be38ba03 100644
--- a/drivers/net/ethernet/marvell/mvneta.c
+++ b/drivers/net/ethernet/marvell/mvneta.c
@@ -1860,8 +1860,8 @@ static void mvneta_txq_bufs_free(struct mvneta_port *pp,
 			bytes_compl += buf->skb->len;
 			pkts_compl++;
 			dev_kfree_skb_any(buf->skb);
-		} else if (buf->type == MVNETA_TYPE_XDP_TX ||
-			   buf->type == MVNETA_TYPE_XDP_NDO) {
+		} else if ((buf->type == MVNETA_TYPE_XDP_TX ||
+			    buf->type == MVNETA_TYPE_XDP_NDO) && buf->xdpf) {
 			if (napi && buf->type == MVNETA_TYPE_XDP_TX)
 				xdp_return_frame_rx_napi(buf->xdpf);
 			else
@@ -2055,47 +2055,87 @@ mvneta_xdp_put_buff(struct mvneta_port *pp, struct mvneta_rx_queue *rxq,
 static int
 mvneta_xdp_submit_frame(struct mvneta_port *pp, struct mvneta_tx_queue *txq,
-			struct xdp_frame *xdpf, bool dma_map)
+			struct xdp_frame *xdpf, int *nxmit_byte, bool dma_map)
 {
-	struct mvneta_tx_desc *tx_desc;
-	struct mvneta_tx_buf *buf;
-	dma_addr_t dma_addr;
+	struct skb_shared_info *sinfo = xdp_get_shared_info_from_frame(xdpf);
+	struct device *dev = pp->dev->dev.parent;
+	struct mvneta_tx_desc *tx_desc = NULL;
+	int i, num_frames = 1;
+	struct page *page;
+
+	if (unlikely(xdp_frame_is_mb(xdpf)))
+		num_frames += sinfo->nr_frags;
 
-	if (txq->count >= txq->tx_stop_threshold)
+	if (txq->count + num_frames >= txq->size)
 		return MVNETA_XDP_DROPPED;
 
-	tx_desc = mvneta_txq_next_desc_get(txq);
+	for (i = 0; i < num_frames; i++) {
+		struct mvneta_tx_buf *buf = &txq->buf[txq->txq_put_index];
+		skb_frag_t *frag = NULL;
+		int len = xdpf->len;
+		dma_addr_t dma_addr;
 
-	buf = &txq->buf[txq->txq_put_index];
-	if (dma_map) {
-		/* ndo_xdp_xmit */
-		dma_addr = dma_map_single(pp->dev->dev.parent, xdpf->data,
-					  xdpf->len, DMA_TO_DEVICE);
-		if (dma_mapping_error(pp->dev->dev.parent, dma_addr)) {
-			mvneta_txq_desc_put(txq);
-			return MVNETA_XDP_DROPPED;
+		if (unlikely(i)) { /* paged area */
+			frag = &sinfo->frags[i - 1];
+			len = skb_frag_size(frag);
 		}
-		buf->type = MVNETA_TYPE_XDP_NDO;
-	} else {
-		struct page *page = virt_to_page(xdpf->data);
 
-		dma_addr = page_pool_get_dma_addr(page) +
-			   sizeof(*xdpf) + xdpf->headroom;
-		dma_sync_single_for_device(pp->dev->dev.parent, dma_addr,
-					   xdpf->len, DMA_BIDIRECTIONAL);
-		buf->type = MVNETA_TYPE_XDP_TX;
+		tx_desc = mvneta_txq_next_desc_get(txq);
+		if (dma_map) {
+			/* ndo_xdp_xmit */
+			void *data;
+
+			data = unlikely(frag) ? skb_frag_address(frag)
+					      : xdpf->data;
+			dma_addr = dma_map_single(dev, data, len,
+						  DMA_TO_DEVICE);
+			if (dma_mapping_error(dev, dma_addr)) {
+				mvneta_txq_desc_put(txq);
+				goto unmap;
+			}
+
+			buf->type = MVNETA_TYPE_XDP_NDO;
+		} else {
+			page = unlikely(frag) ? skb_frag_page(frag)
+					      : virt_to_page(xdpf->data);
+			dma_addr = page_pool_get_dma_addr(page);
+			if (unlikely(frag))
+				dma_addr += skb_frag_off(frag);
+			else
+				dma_addr += sizeof(*xdpf) + xdpf->headroom;
+			dma_sync_single_for_device(dev, dma_addr, len,
+						   DMA_BIDIRECTIONAL);
+			buf->type = MVNETA_TYPE_XDP_TX;
+		}
+		buf->xdpf = unlikely(i) ? NULL : xdpf;
+
+		tx_desc->command = unlikely(i) ? 0 : MVNETA_TXD_F_DESC;
+		tx_desc->buf_phys_addr = dma_addr;
+		tx_desc->data_size = len;
+		*nxmit_byte += len;
+
+		mvneta_txq_inc_put(txq);
 	}
-	buf->xdpf = xdpf;
 
-	tx_desc->command = MVNETA_TXD_FLZ_DESC;
-	tx_desc->buf_phys_addr = dma_addr;
-	tx_desc->data_size = xdpf->len;
+	/*last descriptor */
+	if (likely(tx_desc))
+		tx_desc->command |= MVNETA_TXD_L_DESC | MVNETA_TXD_Z_PAD;
 
-	mvneta_txq_inc_put(txq);
-	txq->pending++;
-	txq->count++;
+	txq->pending += num_frames;
+	txq->count += num_frames;
 
 	return MVNETA_XDP_TX;
+
+unmap:
+	for (i--; i >= 0; i--) {
+		mvneta_txq_desc_put(txq);
+		tx_desc = txq->descs + txq->next_desc_to_proc;
+		dma_unmap_single(dev, tx_desc->buf_phys_addr,
+				 tx_desc->data_size,
+				 DMA_TO_DEVICE);
+	}
+
+	return MVNETA_XDP_DROPPED;
 }
 
 static int
@@ -2104,8 +2144,8 @@ mvneta_xdp_xmit_back(struct mvneta_port *pp, struct xdp_buff *xdp)
 	struct mvneta_pcpu_stats *stats = this_cpu_ptr(pp->stats);
 	struct mvneta_tx_queue *txq;
 	struct netdev_queue *nq;
+	int cpu, nxmit_byte = 0;
 	struct xdp_frame *xdpf;
-	int cpu;
 	u32 ret;
 
 	xdpf = xdp_convert_buff_to_frame(xdp);
@@ -2117,10 +2157,10 @@ mvneta_xdp_xmit_back(struct mvneta_port *pp, struct xdp_buff *xdp)
 	nq = netdev_get_tx_queue(pp->dev, txq->id);
 
 	__netif_tx_lock(nq, cpu);
-	ret = mvneta_xdp_submit_frame(pp, txq, xdpf, false);
+	ret = mvneta_xdp_submit_frame(pp, txq, xdpf, &nxmit_byte, false);
 	if (ret == MVNETA_XDP_TX) {
 		u64_stats_update_begin(&stats->syncp);
-		stats->es.ps.tx_bytes += xdpf->len;
+		stats->es.ps.tx_bytes += nxmit_byte;
 		stats->es.ps.tx_packets++;
 		stats->es.ps.xdp_tx++;
 		u64_stats_update_end(&stats->syncp);
@@ -2159,11 +2199,11 @@ mvneta_xdp_xmit(struct net_device *dev, int num_frame,
 	__netif_tx_lock(nq, cpu);
 	for (i = 0; i < num_frame; i++) {
-		ret = mvneta_xdp_submit_frame(pp, txq, frames[i], true);
+		ret = mvneta_xdp_submit_frame(pp, txq, frames[i], &nxmit_byte,
+					      true);
 		if (ret != MVNETA_XDP_TX)
 			break;
 
-		nxmit_byte += frames[i]->len;
 		nxmit++;
 	}
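The shape of the loop generalizes to any multi-descriptor NIC: one DMA-mapped descriptor per buffer, the first flagged as "first", the last as "last". A driver-agnostic sketch of the same idea (queue_desc() is hypothetical and stands in for the hardware-specific descriptor fill):

/* Generic multi-buffer XDP transmit sketch (not from the series):
 * mirrors the mvneta loop, one descriptor per linear part/fragment.
 */
static int xmit_xdp_frame_sketch(struct xdp_frame *xdpf)
{
	struct skb_shared_info *sinfo = xdp_get_shared_info_from_frame(xdpf);
	int i, num_frames = 1;

	if (xdp_frame_is_mb(xdpf))
		num_frames += sinfo->nr_frags;

	for (i = 0; i < num_frames; i++) {
		void *data = i ? skb_frag_address(&sinfo->frags[i - 1])
			       : xdpf->data;
		int len = i ? skb_frag_size(&sinfo->frags[i - 1])
			    : xdpf->len;

		/* queue_desc() is a hypothetical driver hook */
		queue_desc(data, len, i == 0, i == num_frames - 1);
	}
	return num_frames;
}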
From: Lorenzo Bianconi
Subject: [PATCH v9 bpf-next 06/14] net: mvneta: enable jumbo frames for XDP
Date: Mon, 14 Jun 2021 14:49:44 +0200

Enable the capability to receive jumbo frames even if the interface is running in XDP mode.

Signed-off-by: Lorenzo Bianconi
---
 drivers/net/ethernet/marvell/mvneta.c | 10 ----------
 1 file changed, 10 deletions(-)

diff --git a/drivers/net/ethernet/marvell/mvneta.c b/drivers/net/ethernet/marvell/mvneta.c
index ab57be38ba03..505bec673ff7 100644
--- a/drivers/net/ethernet/marvell/mvneta.c
+++ b/drivers/net/ethernet/marvell/mvneta.c
@@ -3780,11 +3780,6 @@ static int mvneta_change_mtu(struct net_device *dev, int mtu)
 		mtu = ALIGN(MVNETA_RX_PKT_SIZE(mtu), 8);
 	}
 
-	if (pp->xdp_prog && mtu > MVNETA_MAX_RX_BUF_SIZE) {
-		netdev_info(dev, "Illegal MTU value %d for XDP mode\n", mtu);
-		return -EINVAL;
-	}
-
 	dev->mtu = mtu;
 
 	if (!netif_running(dev)) {
@@ -4486,11 +4481,6 @@ static int mvneta_xdp_setup(struct net_device *dev, struct bpf_prog *prog,
 	struct mvneta_port *pp = netdev_priv(dev);
 	struct bpf_prog *old_prog;
 
-	if (prog && dev->mtu > MVNETA_MAX_RX_BUF_SIZE) {
-		NL_SET_ERR_MSG_MOD(extack, "MTU too large for XDP");
-		return -EOPNOTSUPP;
-	}
-
 	if (pp->bm_priv) {
 		NL_SET_ERR_MSG_MOD(extack,
 				   "Hardware Buffer Management not supported on XDP");
From: Lorenzo Bianconi
Subject: [PATCH v9 bpf-next 07/14] net: xdp: add multi-buff support to xdp_build_skb_from_frame
Date: Mon, 14 Jun 2021 14:49:45 +0200

Introduce xdp multi-buff support to the __xdp_build_skb_from_frame/xdp_build_skb_from_frame utility routines.

Signed-off-by: Lorenzo Bianconi
---
 net/core/xdp.c | 13 +++++++++++++
 1 file changed, 13 insertions(+)

diff --git a/net/core/xdp.c b/net/core/xdp.c
index f61c63115c95..71bedf6049a1 100644
--- a/net/core/xdp.c
+++ b/net/core/xdp.c
@@ -582,9 +582,15 @@ struct sk_buff *__xdp_build_skb_from_frame(struct xdp_frame *xdpf,
 					   struct sk_buff *skb,
 					   struct net_device *dev)
 {
+	struct skb_shared_info *sinfo = xdp_get_shared_info_from_frame(xdpf);
 	unsigned int headroom, frame_size;
+	int i, num_frags = 0;
 	void *hard_start;
 
+	/* xdp multi-buff frame */
+	if (unlikely(xdp_frame_is_mb(xdpf)))
+		num_frags = sinfo->nr_frags;
+
 	/* Part of headroom was reserved to xdpf */
 	headroom = sizeof(*xdpf) + xdpf->headroom;
 
@@ -603,6 +609,13 @@ struct sk_buff *__xdp_build_skb_from_frame(struct xdp_frame *xdpf,
 	if (xdpf->metasize)
 		skb_metadata_set(skb, xdpf->metasize);
 
+	for (i = 0; i < num_frags; i++)
+		skb_add_rx_frag(skb, skb_shinfo(skb)->nr_frags,
+				skb_frag_page(&sinfo->frags[i]),
+				skb_frag_off(&sinfo->frags[i]),
+				skb_frag_size(&sinfo->frags[i]),
+				xdpf->frame_sz);
+
 	/* Essential SKB info: protocol and skb->dev */
 	skb->protocol = eth_type_trans(skb, dev);
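This is the routine that paths such as cpumap use when turning redirected frames into skbs. A minimal caller-side sketch (hypothetical code; with this patch the helper attaches the frags for us):

/* Sketch: converting a possibly multi-buffer xdp_frame into an skb.
 * On failure the caller still owns the frame and must return it.
 */
static struct sk_buff *frame_to_skb(struct xdp_frame *xdpf,
				    struct net_device *dev)
{
	struct sk_buff *skb;

	skb = xdp_build_skb_from_frame(xdpf, dev); /* allocs + fills frags */
	if (!skb)
		xdp_return_frame(xdpf);
	return skb;
}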
From: Lorenzo Bianconi
Subject: [PATCH v9 bpf-next 08/14] bpf: add multi-buff support to the bpf_xdp_adjust_tail() API
Date: Mon, 14 Jun 2021 14:49:46 +0200

From: Eelco Chaudron

This change adds support for tail growing and shrinking for XDP multi-buff.
Signed-off-by: Eelco Chaudron
Signed-off-by: Lorenzo Bianconi
---
 include/net/xdp.h |  7 ++++++
 net/core/filter.c | 62 +++++++++++++++++++++++++++++++++++++++++++++++
 net/core/xdp.c    |  5 ++--
 3 files changed, 72 insertions(+), 2 deletions(-)

diff --git a/include/net/xdp.h b/include/net/xdp.h
index 935a6f83115f..3525801c6ed5 100644
--- a/include/net/xdp.h
+++ b/include/net/xdp.h
@@ -132,6 +132,11 @@ xdp_get_shared_info_from_buff(struct xdp_buff *xdp)
 	return (struct skb_shared_info *)xdp_data_hard_end(xdp);
 }
 
+static inline unsigned int xdp_get_frag_tailroom(const skb_frag_t *frag)
+{
+	return PAGE_SIZE - skb_frag_size(frag) - skb_frag_off(frag);
+}
+
 struct xdp_frame {
 	void *data;
 	u16 len;
@@ -259,6 +264,8 @@ struct xdp_frame *xdp_convert_buff_to_frame(struct xdp_buff *xdp)
 	return xdp_frame;
 }
 
+void __xdp_return(void *data, struct xdp_mem_info *mem, bool napi_direct,
+		  struct xdp_buff *xdp);
 void xdp_return_frame(struct xdp_frame *xdpf);
 void xdp_return_frame_rx_napi(struct xdp_frame *xdpf);
 void xdp_return_buff(struct xdp_buff *xdp);
diff --git a/net/core/filter.c b/net/core/filter.c
index caa88955562e..05f574a3d690 100644
--- a/net/core/filter.c
+++ b/net/core/filter.c
@@ -3859,11 +3859,73 @@ static const struct bpf_func_proto bpf_xdp_adjust_head_proto = {
 	.arg2_type	= ARG_ANYTHING,
 };
 
+static int bpf_xdp_mb_adjust_tail(struct xdp_buff *xdp, int offset)
+{
+	struct skb_shared_info *sinfo;
+
+	if (unlikely(!xdp_buff_is_mb(xdp)))
+		return -EINVAL;
+
+	sinfo = xdp_get_shared_info_from_buff(xdp);
+	if (offset >= 0) {
+		skb_frag_t *frag = &sinfo->frags[sinfo->nr_frags - 1];
+		int size;
+
+		if (unlikely(offset > xdp_get_frag_tailroom(frag)))
+			return -EINVAL;
+
+		size = skb_frag_size(frag);
+		memset(skb_frag_address(frag) + size, 0, offset);
+		skb_frag_size_set(frag, size + offset);
+		sinfo->data_len += offset;
+	} else {
+		int i, n_frags_free = 0, len_free = 0;
+
+		offset = abs(offset);
+		if (unlikely(offset > ((int)(xdp->data_end - xdp->data) +
+				       sinfo->data_len - ETH_HLEN)))
+			return -EINVAL;
+
+		for (i = sinfo->nr_frags - 1; i >= 0 && offset > 0; i--) {
+			skb_frag_t *frag = &sinfo->frags[i];
+			int size = skb_frag_size(frag);
+			int shrink = min_t(int, offset, size);
+
+			len_free += shrink;
+			offset -= shrink;
+
+			if (unlikely(size == shrink)) {
+				struct page *page = skb_frag_page(frag);
+
+				__xdp_return(page_address(page), &xdp->rxq->mem,
+					     false, NULL);
+				n_frags_free++;
+			} else {
+				skb_frag_size_set(frag, size - shrink);
+				break;
+			}
+		}
+		sinfo->nr_frags -= n_frags_free;
+		sinfo->data_len -= len_free;
+
+		if (unlikely(!sinfo->nr_frags))
+			xdp_buff_clear_mb(xdp);
+
+		if (unlikely(offset > 0))
+			xdp->data_end -= offset;
+	}
+
+	return 0;
+}
+
 BPF_CALL_2(bpf_xdp_adjust_tail, struct xdp_buff *, xdp, int, offset)
 {
 	void *data_hard_end = xdp_data_hard_end(xdp); /* use xdp->frame_sz */
 	void *data_end = xdp->data_end + offset;
 
+	if (unlikely(xdp_buff_is_mb(xdp)))
+		return bpf_xdp_mb_adjust_tail(xdp, offset);
+
 	/* Notice that xdp_data_hard_end have reserved some tailroom */
 	if (unlikely(data_end > data_hard_end))
 		return -EINVAL;
diff --git a/net/core/xdp.c b/net/core/xdp.c
index 71bedf6049a1..ffd70d3e9e5d 100644
--- a/net/core/xdp.c
+++ b/net/core/xdp.c
@@ -338,8 +338,8 @@ EXPORT_SYMBOL_GPL(xdp_rxq_info_reg_mem_model);
  * is used for those calls sites. Thus, allowing for faster recycling
  * of xdp_frames/pages in those cases.
  */
-static void __xdp_return(void *data, struct xdp_mem_info *mem, bool napi_direct,
-			 struct xdp_buff *xdp)
+void __xdp_return(void *data, struct xdp_mem_info *mem, bool napi_direct,
+		  struct xdp_buff *xdp)
 {
 	struct xdp_mem_allocator *xa;
 	struct page *page;
@@ -372,6 +372,7 @@ static void __xdp_return(void *data, struct xdp_mem_info *mem, bool napi_direct,
 		break;
 	}
 }
+EXPORT_SYMBOL_GPL(__xdp_return);
 
 void xdp_return_frame(struct xdp_frame *xdpf)
 {
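From the BPF program's point of view the calling convention is unchanged; shrinking simply may now cross fragment boundaries and release fully consumed fragments. A minimal BPF-C sketch (illustrative program, not from the series):

#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

/* Sketch: trim 100 bytes off the packet tail. On a multi-buffer
 * packet the kernel walks frags from the tail, freeing pages that
 * become empty.
 */
SEC("xdp")
int trim_tail(struct xdp_md *ctx)
{
	/* negative offset shrinks; fails e.g. if the packet is too short */
	if (bpf_xdp_adjust_tail(ctx, -100) < 0)
		return XDP_DROP;
	return XDP_PASS;
}

char _license[] SEC("license") = "GPL";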
From: Lorenzo Bianconi
Subject: [PATCH v9 bpf-next 09/14] bpf: introduce bpf_xdp_get_buff_len helper
Date: Mon, 14 Jun 2021 14:49:47 +0200

Introduce the bpf_xdp_get_buff_len helper in order to return the total xdp buffer size (linear and paged area).

Signed-off-by: Lorenzo Bianconi
---
 include/uapi/linux/bpf.h       |  7 +++++++
 net/core/filter.c              | 23 +++++++++++++++++++++++
 tools/include/uapi/linux/bpf.h |  7 +++++++
 3 files changed, 37 insertions(+)
diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
index 2c1ba70abbf1..0116c0c569fe 100644
--- a/include/uapi/linux/bpf.h
+++ b/include/uapi/linux/bpf.h
@@ -4778,6 +4778,12 @@ union bpf_attr {
  *		Execute close syscall for given FD.
  *	Return
  *		A syscall result.
+ *
+ * int bpf_xdp_get_buff_len(struct xdp_buff *xdp_md)
+ *	Description
+ *		Get the total size of a given xdp buff (linear and paged area)
+ *	Return
+ *		The total size of a given xdp buffer.
  */
 #define __BPF_FUNC_MAPPER(FN)		\
 	FN(unspec),			\
@@ -4949,6 +4955,7 @@ union bpf_attr {
 	FN(sys_bpf),			\
 	FN(btf_find_by_name_kind),	\
 	FN(sys_close),			\
+	FN(xdp_get_buff_len),		\
 	/* */
 
 /* integer value in 'imm' field of BPF_CALL instruction selects which helper
diff --git a/net/core/filter.c b/net/core/filter.c
index 05f574a3d690..b0855f2d4726 100644
--- a/net/core/filter.c
+++ b/net/core/filter.c
@@ -3918,6 +3918,27 @@ static int bpf_xdp_mb_adjust_tail(struct xdp_buff *xdp, int offset)
 	return 0;
 }
 
+BPF_CALL_1(bpf_xdp_get_buff_len, struct xdp_buff*, xdp)
+{
+	int data_len = 0;
+
+	if (unlikely(xdp_buff_is_mb(xdp))) {
+		struct skb_shared_info *sinfo;
+
+		sinfo = xdp_get_shared_info_from_buff(xdp);
+		data_len = sinfo->data_len;
+	}
+
+	return (xdp->data_end - xdp->data) + data_len;
+}
+
+const struct bpf_func_proto bpf_xdp_get_buff_len_proto = {
+	.func		= bpf_xdp_get_buff_len,
+	.gpl_only	= false,
+	.ret_type	= RET_INTEGER,
+	.arg1_type	= ARG_PTR_TO_CTX,
+};
+
 BPF_CALL_2(bpf_xdp_adjust_tail, struct xdp_buff *, xdp, int, offset)
 {
 	void *data_hard_end = xdp_data_hard_end(xdp); /* use xdp->frame_sz */
@@ -7454,6 +7475,8 @@ xdp_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog)
 		return &bpf_xdp_redirect_map_proto;
 	case BPF_FUNC_xdp_adjust_tail:
 		return &bpf_xdp_adjust_tail_proto;
+	case BPF_FUNC_xdp_get_buff_len:
+		return &bpf_xdp_get_buff_len_proto;
 	case BPF_FUNC_fib_lookup:
 		return &bpf_xdp_fib_lookup_proto;
 	case BPF_FUNC_check_mtu:
diff --git a/tools/include/uapi/linux/bpf.h b/tools/include/uapi/linux/bpf.h
index 2c1ba70abbf1..0116c0c569fe 100644
--- a/tools/include/uapi/linux/bpf.h
+++ b/tools/include/uapi/linux/bpf.h
@@ -4778,6 +4778,12 @@ union bpf_attr {
  *		Execute close syscall for given FD.
  *	Return
  *		A syscall result.
+ *
+ * int bpf_xdp_get_buff_len(struct xdp_buff *xdp_md)
+ *	Description
+ *		Get the total size of a given xdp buff (linear and paged area)
+ *	Return
+ *		The total size of a given xdp buffer.
  */
 #define __BPF_FUNC_MAPPER(FN)		\
 	FN(unspec),			\
@@ -4949,6 +4955,7 @@ union bpf_attr {
 	FN(sys_bpf),			\
 	FN(btf_find_by_name_kind),	\
 	FN(sys_close),			\
+	FN(xdp_get_buff_len),		\
 	/* */
 
 /* integer value in 'imm' field of BPF_CALL instruction selects which helper
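A minimal BPF-C sketch of the helper in use (assuming bpf_helper_defs.h has been regenerated from the patched UAPI so the declaration exists): on a multi-buffer packet the value exceeds what data_end - data can see.

#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

SEC("xdp")
int measure(struct xdp_md *ctx)
{
	int linear = ctx->data_end - ctx->data;
	int total = bpf_xdp_get_buff_len(ctx);

	/* total >= linear; the difference is the paged (frags) length */
	if (total != linear)
		bpf_printk("mb frame: %d bytes total", total);
	return XDP_PASS;
}

char _license[] SEC("license") = "GPL";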
From: Lorenzo Bianconi
Subject: [PATCH v9 bpf-next 10/14] bpf: add multi-buffer support to xdp copy helpers
Date: Mon, 14 Jun 2021 14:49:48 +0200

From: Eelco Chaudron

This patch adds multi-buffer support for the following helpers:
- bpf_xdp_output()
- bpf_perf_event_output()

Signed-off-by: Eelco Chaudron
Signed-off-by: Lorenzo Bianconi
---
 kernel/trace/bpf_trace.c                       |   3 +
 net/core/filter.c                              |  72 +++++++++-
 .../selftests/bpf/prog_tests/xdp_bpf2bpf.c     | 127 ++++++++++++------
 .../selftests/bpf/progs/test_xdp_bpf2bpf.c     |   2 +-
 4 files changed, 160 insertions(+), 44 deletions(-)

diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
index d2d7cf6cfe83..ee926ec64f78 100644
--- a/kernel/trace/bpf_trace.c
+++ b/kernel/trace/bpf_trace.c
@@ -1365,6 +1365,7 @@ static const struct bpf_func_proto bpf_perf_event_output_proto_raw_tp = {
 
 extern const struct bpf_func_proto bpf_skb_output_proto;
 extern const struct bpf_func_proto bpf_xdp_output_proto;
+extern const struct bpf_func_proto bpf_xdp_get_buff_len_trace_proto;
 
 BPF_CALL_3(bpf_get_stackid_raw_tp, struct bpf_raw_tracepoint_args *, args,
 	   struct bpf_map *, map, u64, flags)
@@ -1460,6 +1461,8 @@ tracing_prog_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog)
 		return &bpf_sock_from_file_proto;
 	case BPF_FUNC_get_socket_cookie:
 		return &bpf_get_socket_ptr_cookie_proto;
+	case BPF_FUNC_xdp_get_buff_len:
+		return &bpf_xdp_get_buff_len_trace_proto;
 #endif
 	case BPF_FUNC_seq_printf:
 		return prog->expected_attach_type == BPF_TRACE_ITER ?
diff --git a/net/core/filter.c b/net/core/filter.c
index b0855f2d4726..f7211b7908a9 100644
--- a/net/core/filter.c
+++ b/net/core/filter.c
@@ -3939,6 +3939,15 @@ const struct bpf_func_proto bpf_xdp_get_buff_len_proto = {
 	.arg1_type	= ARG_PTR_TO_CTX,
 };
 
+BTF_ID_LIST_SINGLE(bpf_xdp_get_buff_len_bpf_ids, struct, xdp_buff)
+
+const struct bpf_func_proto bpf_xdp_get_buff_len_trace_proto = {
+	.func		= bpf_xdp_get_buff_len,
+	.gpl_only	= false,
+	.arg1_type	= ARG_PTR_TO_BTF_ID,
+	.arg1_btf_id	= &bpf_xdp_get_buff_len_bpf_ids[0],
+};
+
 BPF_CALL_2(bpf_xdp_adjust_tail, struct xdp_buff *, xdp, int, offset)
 {
 	void *data_hard_end = xdp_data_hard_end(xdp); /* use xdp->frame_sz */
@@ -4606,10 +4615,56 @@ static const struct bpf_func_proto bpf_sk_ancestor_cgroup_id_proto = {
 };
 #endif
 
-static unsigned long bpf_xdp_copy(void *dst_buff, const void *src_buff,
+static unsigned long bpf_xdp_copy(void *dst_buff, const void *ctx,
 				  unsigned long off, unsigned long len)
 {
-	memcpy(dst_buff, src_buff + off, len);
+	struct xdp_buff *xdp = (struct xdp_buff *)ctx;
+	struct skb_shared_info *sinfo;
+	unsigned long base_len;
+
+	if (likely(!xdp_buff_is_mb(xdp))) {
+		memcpy(dst_buff, xdp->data + off, len);
+		return 0;
+	}
+
+	base_len = xdp->data_end - xdp->data;
+	sinfo = xdp_get_shared_info_from_buff(xdp);
+	do {
+		const void *src_buff = NULL;
+		unsigned long copy_len = 0;
+
+		if (off < base_len) {
+			src_buff = xdp->data + off;
+			copy_len = min(len, base_len - off);
+		} else {
+			unsigned long frag_off_total = base_len;
+			int i;
+
+			for (i = 0; i < sinfo->nr_frags; i++) {
+				skb_frag_t *frag = &sinfo->frags[i];
+				unsigned long frag_len, frag_off;
+
+				frag_len = skb_frag_size(frag);
+				frag_off = off - frag_off_total;
+				if (frag_off < frag_len) {
+					src_buff = skb_frag_address(frag) +
+						   frag_off;
+					copy_len = min(len,
+						       frag_len - frag_off);
+					break;
+				}
+				frag_off_total += frag_len;
+			}
+		}
+		if (!src_buff)
+			break;
+
+		memcpy(dst_buff, src_buff, copy_len);
+		off += copy_len;
+		len -= copy_len;
+		dst_buff += copy_len;
+	} while (len);
+
 	return 0;
 }
 
@@ -4621,10 +4676,19 @@ BPF_CALL_5(bpf_xdp_event_output, struct xdp_buff *, xdp, struct bpf_map *, map,
 	if (unlikely(flags & ~(BPF_F_CTXLEN_MASK | BPF_F_INDEX_MASK)))
 		return -EINVAL;
 
 	if (unlikely(!xdp ||
-		     xdp_size > (unsigned long)(xdp->data_end - xdp->data)))
+		     (likely(!xdp_buff_is_mb(xdp)) &&
+		      xdp_size > (unsigned long)(xdp->data_end - xdp->data))))
 		return -EFAULT;
+	if (unlikely(xdp_buff_is_mb(xdp))) {
+		struct skb_shared_info *sinfo;
+
+		sinfo = xdp_get_shared_info_from_buff(xdp);
+		if (unlikely(xdp_size > ((int)(xdp->data_end - xdp->data) +
+					 sinfo->data_len)))
+			return -EFAULT;
+	}
 
-	return bpf_event_output(map, flags, meta, meta_size, xdp->data,
+	return bpf_event_output(map, flags, meta, meta_size, xdp,
 				xdp_size, bpf_xdp_copy);
 }
diff --git a/tools/testing/selftests/bpf/prog_tests/xdp_bpf2bpf.c b/tools/testing/selftests/bpf/prog_tests/xdp_bpf2bpf.c
index 3bd5904b4db5..cc9be5912be8 100644
--- a/tools/testing/selftests/bpf/prog_tests/xdp_bpf2bpf.c
+++ b/tools/testing/selftests/bpf/prog_tests/xdp_bpf2bpf.c
@@ -10,11 +10,20 @@ struct meta {
 	int pkt_len;
 };
 
+struct test_ctx_s {
+	bool passed;
+	int pkt_size;
+};
+
+struct test_ctx_s test_ctx;
+
 static void on_sample(void *ctx, int cpu, void *data, __u32 size)
 {
-	int duration = 0;
 	struct meta *meta = (struct meta *)data;
 	struct ipv4_packet *trace_pkt_v4 = data + sizeof(*meta);
+	unsigned char *raw_pkt = data + sizeof(*meta);
+	struct test_ctx_s *tst_ctx = ctx;
+	int duration = 0;
 
 	if (CHECK(size < sizeof(pkt_v4) + sizeof(*meta),
 		  "check_size", "size %u < %zu\n",
@@ -25,25 +34,90 @@ static void on_sample(void *ctx, int cpu, void *data, __u32 size)
 		  "meta->ifindex = %d\n", meta->ifindex))
 		return;
 
-	if (CHECK(meta->pkt_len != sizeof(pkt_v4), "check_meta_pkt_len",
-		  "meta->pkt_len = %zd\n", sizeof(pkt_v4)))
+	if (CHECK(meta->pkt_len != tst_ctx->pkt_size, "check_meta_pkt_len",
+		  "meta->pkt_len = %d\n", tst_ctx->pkt_size))
 		return;
 
 	if (CHECK(memcmp(trace_pkt_v4, &pkt_v4, sizeof(pkt_v4)),
 		  "check_packet_content", "content not the same\n"))
 		return;
 
-	*(bool *)ctx = true;
+	if (meta->pkt_len > sizeof(pkt_v4)) {
+		for (int i = 0; i < (meta->pkt_len - sizeof(pkt_v4)); i++) {
+			if (raw_pkt[i + sizeof(pkt_v4)] != (unsigned char)i) {
+				CHECK(true, "check_packet_content",
+				      "byte %zu does not match %u != %u\n",
+				      i + sizeof(pkt_v4),
+				      raw_pkt[i + sizeof(pkt_v4)],
+				      (unsigned char)i);
+				break;
+			}
+		}
+	}
+
+	tst_ctx->passed = true;
 }
 
-void test_xdp_bpf2bpf(void)
+static int run_xdp_bpf2bpf_pkt_size(int pkt_fd, struct perf_buffer *pb,
+				    struct test_xdp_bpf2bpf *ftrace_skel,
+				    int pkt_size)
 {
 	__u32 duration = 0, retval, size;
-	char buf[128];
+	unsigned char buf_in[9000];
+	unsigned char buf[9000];
+	int err;
+
+	if (pkt_size > sizeof(buf_in) || pkt_size < sizeof(pkt_v4))
+		return -EINVAL;
+
+	test_ctx.passed = false;
+	test_ctx.pkt_size = pkt_size;
+
+	memcpy(buf_in, &pkt_v4, sizeof(pkt_v4));
+	if (pkt_size > sizeof(pkt_v4)) {
+		for (int i = 0; i < (pkt_size - sizeof(pkt_v4)); i++)
+			buf_in[i + sizeof(pkt_v4)] = i;
+	}
+
+	/* Run test program */
+	err = bpf_prog_test_run(pkt_fd, 1, buf_in, pkt_size,
+				buf, &size, &retval, &duration);
+
+	if (CHECK(err || retval != XDP_PASS || size != pkt_size,
+		  "ipv4", "err %d errno %d retval %d size %d\n",
+		  err, errno, retval, size))
+		return -1;
+
+	/* Make sure bpf_xdp_output() was triggered and it sent the expected
+	 * data to the perf ring buffer.
+ */ + err = perf_buffer__poll(pb, 100); + if (CHECK(err <= 0, "perf_buffer__poll", "err %d\n", err)) + return -1; + + if (CHECK_FAIL(!test_ctx.passed)) + return -1; + + /* Verify test results */ + if (CHECK(ftrace_skel->bss->test_result_fentry != if_nametoindex("lo"), + "result", "fentry failed err %llu\n", + ftrace_skel->bss->test_result_fentry)) + return -1; + + if (CHECK(ftrace_skel->bss->test_result_fexit != XDP_PASS, "result", + "fexit failed err %llu\n", + ftrace_skel->bss->test_result_fexit)) + return -1; + + return 0; +} + +void test_xdp_bpf2bpf(void) +{ int err, pkt_fd, map_fd; - bool passed = false; - struct iphdr *iph = (void *)buf + sizeof(struct ethhdr); - struct iptnl_info value4 = {.family = AF_INET}; + __u32 duration = 0; + int pkt_sizes[] = {sizeof(pkt_v4), 1024, 4100, 8200}; + struct iptnl_info value4 = {.family = AF_INET6}; struct test_xdp *pkt_skel = NULL; struct test_xdp_bpf2bpf *ftrace_skel = NULL; struct vip key4 = {.protocol = 6, .family = AF_INET}; @@ -87,40 +161,15 @@ void test_xdp_bpf2bpf(void) /* Set up perf buffer */ pb_opts.sample_cb = on_sample; - pb_opts.ctx = &passed; + pb_opts.ctx = &test_ctx; pb = perf_buffer__new(bpf_map__fd(ftrace_skel->maps.perf_buf_map), - 1, &pb_opts); + 8, &pb_opts); if (!ASSERT_OK_PTR(pb, "perf_buf__new")) goto out; - /* Run test program */ - err = bpf_prog_test_run(pkt_fd, 1, &pkt_v4, sizeof(pkt_v4), - buf, &size, &retval, &duration); - - if (CHECK(err || retval != XDP_TX || size != 74 || - iph->protocol != IPPROTO_IPIP, "ipv4", - "err %d errno %d retval %d size %d\n", - err, errno, retval, size)) - goto out; - - /* Make sure bpf_xdp_output() was triggered and it sent the expected - * data to the perf ring buffer. - */ - err = perf_buffer__poll(pb, 100); - if (CHECK(err < 0, "perf_buffer__poll", "err %d\n", err)) - goto out; - - CHECK_FAIL(!passed); - - /* Verify test results */ - if (CHECK(ftrace_skel->bss->test_result_fentry != if_nametoindex("lo"), - "result", "fentry failed err %llu\n", - ftrace_skel->bss->test_result_fentry)) - goto out; - - CHECK(ftrace_skel->bss->test_result_fexit != XDP_TX, "result", - "fexit failed err %llu\n", ftrace_skel->bss->test_result_fexit); - + for (int i = 0; i < ARRAY_SIZE(pkt_sizes); i++) + run_xdp_bpf2bpf_pkt_size(pkt_fd, pb, ftrace_skel, + pkt_sizes[i]); out: if (pb) perf_buffer__free(pb); diff --git a/tools/testing/selftests/bpf/progs/test_xdp_bpf2bpf.c b/tools/testing/selftests/bpf/progs/test_xdp_bpf2bpf.c index a038e827f850..902b54190377 100644 --- a/tools/testing/selftests/bpf/progs/test_xdp_bpf2bpf.c +++ b/tools/testing/selftests/bpf/progs/test_xdp_bpf2bpf.c @@ -49,7 +49,7 @@ int BPF_PROG(trace_on_entry, struct xdp_buff *xdp) void *data = (void *)(long)xdp->data; meta.ifindex = xdp->rxq->dev->ifindex; - meta.pkt_len = data_end - data; + meta.pkt_len = bpf_xdp_get_buff_len((struct xdp_md *)xdp); bpf_xdp_output(xdp, &perf_buf_map, ((__u64) meta.pkt_len << 32) | BPF_F_CURRENT_CPU,
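For reference, the offset-to-fragment lookup that the reworked bpf_xdp_copy() performs on a multi-buff frame reduces to the following standalone sketch. The types and names here are illustrative plain C, not the kernel's skb_frag_t API; it only shows how an absolute offset is resolved against the linear area and the paged frags:

    struct frag { const char *addr; unsigned long len; };

    /* Return a pointer to the byte at 'off' and how many contiguous bytes
     * are available from there, or NULL if 'off' is past the frame end.
     */
    static const char *chunk_at(const char *linear, unsigned long linear_len,
                                const struct frag *frags, int nr_frags,
                                unsigned long off, unsigned long *avail)
    {
        unsigned long base = linear_len;
        int i;

        if (off < linear_len) {
            *avail = linear_len - off;
            return linear + off;
        }
        for (i = 0; i < nr_frags; i++) {
            if (off - base < frags[i].len) {
                *avail = frags[i].len - (off - base);
                return frags[i].addr + (off - base);
            }
            base += frags[i].len;
        }
        return NULL;
    }

A copy loop then moves min(len, avail) bytes, advances off, and repeats until len is exhausted, which is exactly the structure of the do/while loop in the helper above.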
From patchwork Mon Jun 14 12:49:49 2021
From: Lorenzo Bianconi
Subject: [PATCH v9 bpf-next 11/14] bpf: move user_size out of bpf_test_init
Date: Mon, 14 Jun 2021 14:49:49 +0200

Move user_size out of the bpf_test_init() routine: take it as an explicit parameter in the signature and let callers pass kattr->test.data_size_in themselves instead of reading it inside the routine.
This is a preliminary patch to introduce the xdp multi-buff selftests; a short note on the resulting calling convention follows the diff below.

Signed-off-by: Lorenzo Bianconi
---
 net/bpf/test_run.c | 13 +++++++------ 1 file changed, 7 insertions(+), 6 deletions(-)

diff --git a/net/bpf/test_run.c b/net/bpf/test_run.c index aa47af349ba8..d3252a234678 100644 --- a/net/bpf/test_run.c +++ b/net/bpf/test_run.c @@ -245,11 +245,10 @@ bool bpf_prog_test_check_kfunc_call(u32 kfunc_id) return btf_id_set_contains(&test_sk_kfunc_ids, kfunc_id); } -static void *bpf_test_init(const union bpf_attr *kattr, u32 size, - u32 headroom, u32 tailroom) +static void *bpf_test_init(const union bpf_attr *kattr, u32 user_size, + u32 size, u32 headroom, u32 tailroom) { void __user *data_in = u64_to_user_ptr(kattr->test.data_in); - u32 user_size = kattr->test.data_size_in; void *data; if (size < ETH_HLEN || size > PAGE_SIZE - headroom - tailroom) @@ -570,7 +569,8 @@ int bpf_prog_test_run_skb(struct bpf_prog *prog, const union bpf_attr *kattr, if (kattr->test.flags || kattr->test.cpu) return -EINVAL; - data = bpf_test_init(kattr, size, NET_SKB_PAD + NET_IP_ALIGN, + data = bpf_test_init(kattr, kattr->test.data_size_in, + size, NET_SKB_PAD + NET_IP_ALIGN, SKB_DATA_ALIGN(sizeof(struct skb_shared_info))); if (IS_ERR(data)) return PTR_ERR(data); @@ -707,7 +707,8 @@ int bpf_prog_test_run_xdp(struct bpf_prog *prog, const union bpf_attr *kattr, /* XDP have extra tailroom as (most) drivers use full page */ max_data_sz = 4096 - headroom - tailroom; - data = bpf_test_init(kattr, max_data_sz, headroom, tailroom); + data = bpf_test_init(kattr, kattr->test.data_size_in, + max_data_sz, headroom, tailroom); if (IS_ERR(data)) return PTR_ERR(data); @@ -769,7 +770,7 @@ int bpf_prog_test_run_flow_dissector(struct bpf_prog *prog, if (size < ETH_HLEN) return -EINVAL; - data = bpf_test_init(kattr, size, 0, 0); + data = bpf_test_init(kattr, kattr->test.data_size_in, size, 0, 0); if (IS_ERR(data)) return PTR_ERR(data);
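The point of the split becomes visible in the next patch: a caller can clamp the linear allocation while still controlling exactly how many bytes bpf_test_init() copies from userspace. The XDP runner ends up with this pattern (taken from the patch 12 diff; repeated here only to make the new contract explicit):

    /* XDP has extra tailroom as (most) drivers use a full page */
    max_data_sz = 4096 - headroom - tailroom;

    /* only the part that fits the linear area goes through bpf_test_init();
     * the remainder of data_size_in is copied into paged frags afterwards
     */
    size = min_t(u32, kattr->test.data_size_in, max_data_sz);
    data = bpf_test_init(kattr, size, max_data_sz, headroom, tailroom);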
From patchwork Mon Jun 14 12:49:50 2021
From: Lorenzo Bianconi
Subject: [PATCH v9 bpf-next 12/14] bpf: introduce multibuff support to bpf_prog_test_run_xdp()
Date: Mon, 14 Jun 2021 14:49:50 +0200

Introduce the capability to allocate an xdp multi-buff in the bpf_prog_test_run_xdp() routine. This is a preliminary patch to introduce the selftests for the new xdp multi-buff eBPF helpers; a userspace sketch that exercises this path follows the diff below.

Signed-off-by: Lorenzo Bianconi
---
 net/bpf/test_run.c | 52 +++++++++++++++++++++++++++++++++++++++------- 1 file changed, 44 insertions(+), 8 deletions(-)

diff --git a/net/bpf/test_run.c b/net/bpf/test_run.c index d3252a234678..5e2c8478ae18 100644 --- a/net/bpf/test_run.c +++ b/net/bpf/test_run.c @@ -692,23 +692,22 @@ int bpf_prog_test_run_xdp(struct bpf_prog *prog, const union bpf_attr *kattr, { u32 tailroom = SKB_DATA_ALIGN(sizeof(struct skb_shared_info)); u32 headroom = XDP_PACKET_HEADROOM; - u32 size = kattr->test.data_size_in; u32 repeat = kattr->test.repeat; struct netdev_rx_queue *rxqueue; + struct skb_shared_info *sinfo; struct xdp_buff xdp = {}; + u32 max_data_sz, size; u32 retval, duration; - u32 max_data_sz; + int i, ret; void *data; - int ret; if (kattr->test.ctx_in || kattr->test.ctx_out) return -EINVAL; - /* XDP have extra tailroom as (most) drivers use full page */ max_data_sz = 4096 - headroom - tailroom; + size = min_t(u32, kattr->test.data_size_in, max_data_sz); - data = bpf_test_init(kattr, kattr->test.data_size_in, - max_data_sz, headroom, tailroom); + data = bpf_test_init(kattr, size, max_data_sz, headroom, tailroom); if (IS_ERR(data)) return PTR_ERR(data); @@ -717,16 +716,53 @@ int bpf_prog_test_run_xdp(struct bpf_prog *prog, const union bpf_attr *kattr, &rxqueue->xdp_rxq); xdp_prepare_buff(&xdp, data, headroom, size, true); + sinfo = xdp_get_shared_info_from_buff(&xdp); + if (unlikely(kattr->test.data_size_in > size)) { + void __user *data_in = u64_to_user_ptr(kattr->test.data_in); + + while (size < kattr->test.data_size_in) { + struct page *page; + skb_frag_t *frag; + int data_len; + + page = alloc_page(GFP_KERNEL); + if (!page) { + ret = -ENOMEM; + goto out; + } + + frag = &sinfo->frags[sinfo->nr_frags++]; + __skb_frag_set_page(frag, page); + + data_len = min_t(int, kattr->test.data_size_in - size, + PAGE_SIZE); + skb_frag_size_set(frag, data_len); + + if (copy_from_user(page_address(page), data_in + size, + data_len)) { + ret = -EFAULT; + goto out; + } + sinfo->data_len += data_len; + size += data_len; + } + xdp_buff_set_mb(&xdp); + } + bpf_prog_change_xdp(NULL, prog); ret = bpf_test_run(prog, &xdp,
repeat, &retval, &duration, true); if (ret) goto out; - if (xdp.data != data + headroom || xdp.data_end != xdp.data + size) - size = xdp.data_end - xdp.data; + + size = xdp.data_end - xdp.data + sinfo->data_len; ret = bpf_test_finish(kattr, uattr, xdp.data, size, retval, duration); + out: bpf_prog_change_xdp(prog, NULL); + for (i = 0; i < sinfo->nr_frags; i++) + __free_page(skb_frag_page(&sinfo->frags[i])); kfree(data); + return ret; }
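A minimal userspace sketch of how this path is exercised, mirroring the selftests later in this series (prog_fd is assumed to be the fd of a loaded XDP program; bpf_prog_test_run() is the long-standing libbpf test-run wrapper these selftests already use):

    #include <string.h>
    #include <bpf/bpf.h>

    static int run_big_frame(int prog_fd)
    {
        unsigned char in[9000], out[9000];
        __u32 out_size = 0, retval = 0, duration = 0;

        /* anything larger than the linear area (one page minus headroom
         * and tailroom) now makes test_run allocate paged frags and mark
         * the xdp_buff as multi-buff
         */
        memset(in, 0xab, sizeof(in));
        return bpf_prog_test_run(prog_fd, 1, in, sizeof(in),
                                 out, &out_size, &retval, &duration);
    }

Before this patch such an oversized data_size_in was rejected by bpf_test_init(), since the whole input had to fit into the linear buffer.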
From patchwork Mon Jun 14 12:49:51 2021
From: Lorenzo Bianconi
Subject: [PATCH v9 bpf-next 13/14] bpf: test_run: add skb_shared_info pointer in bpf_test_finish signature
Date: Mon, 14 Jun 2021 14:49:51 +0200

Introduce a skb_shared_info pointer in the bpf_test_finish() signature in order to copy paged data from an xdp multi-buff frame back to the userspace buffer; a condensed note on the resulting copy-back semantics follows the diff below.

Signed-off-by: Lorenzo Bianconi
---
 net/bpf/test_run.c | 47 ++++++++++++++++++++++++++++++++++++++-------- 1 file changed, 39 insertions(+), 8 deletions(-)

diff --git a/net/bpf/test_run.c b/net/bpf/test_run.c index 5e2c8478ae18..e6b15e3d2086 100644 --- a/net/bpf/test_run.c +++ b/net/bpf/test_run.c @@ -128,7 +128,8 @@ static int bpf_test_run(struct bpf_prog *prog, void *ctx, u32 repeat, static int bpf_test_finish(const union bpf_attr *kattr, union bpf_attr __user *uattr, const void *data, - u32 size, u32 retval, u32 duration) + struct skb_shared_info *sinfo, u32 size, + u32 retval, u32 duration) { void __user *data_out = u64_to_user_ptr(kattr->test.data_out); int err = -EFAULT; @@ -143,8 +144,36 @@ static int bpf_test_finish(const union bpf_attr *kattr, err = -ENOSPC; } - if (data_out && copy_to_user(data_out, data, copy_size)) - goto out; + if (data_out) { + int len = sinfo ? copy_size - sinfo->data_len : copy_size; + + if (copy_to_user(data_out, data, len)) + goto out; + + if (sinfo) { + int i, offset = len, data_len; + + for (i = 0; i < sinfo->nr_frags; i++) { + skb_frag_t *frag = &sinfo->frags[i]; + + if (offset >= copy_size) { + err = -ENOSPC; + break; + } + + data_len = min_t(int, copy_size - offset, + skb_frag_size(frag)); + + if (copy_to_user(data_out + offset, + skb_frag_address(frag), + data_len)) + goto out; + + offset += data_len; + } + } + } + if (copy_to_user(&uattr->test.data_size_out, &size, sizeof(size))) goto out; if (copy_to_user(&uattr->test.retval, &retval, sizeof(retval))) @@ -673,7 +702,8 @@ int bpf_prog_test_run_skb(struct bpf_prog *prog, const union bpf_attr *kattr, /* bpf program can never convert linear skb to non-linear */ if (WARN_ON_ONCE(skb_is_nonlinear(skb))) size = skb_headlen(skb); - ret = bpf_test_finish(kattr, uattr, skb->data, size, retval, duration); + ret = bpf_test_finish(kattr, uattr, skb->data, NULL, size, retval, + duration); if (!ret) ret = bpf_ctx_finish(kattr, uattr, ctx, sizeof(struct __sk_buff)); @@ -755,7 +785,8 @@ int bpf_prog_test_run_xdp(struct bpf_prog *prog, const union bpf_attr *kattr, goto out; size = xdp.data_end - xdp.data + sinfo->data_len; - ret = bpf_test_finish(kattr, uattr, xdp.data, size, retval, duration); + ret = bpf_test_finish(kattr, uattr, xdp.data, sinfo, size, retval, + duration); out: bpf_prog_change_xdp(prog, NULL); @@ -841,8 +872,8 @@ int bpf_prog_test_run_flow_dissector(struct bpf_prog *prog, if (ret < 0) goto out; - ret = bpf_test_finish(kattr, uattr, &flow_keys, sizeof(flow_keys), - retval, duration); + ret = bpf_test_finish(kattr, uattr, &flow_keys, NULL, + sizeof(flow_keys), retval, duration); if (!ret) ret = bpf_ctx_finish(kattr, uattr, user_ctx, sizeof(struct bpf_flow_keys)); @@ -946,7 +977,7 @@ int bpf_prog_test_run_sk_lookup(struct bpf_prog *prog, const union bpf_attr *kat user_ctx->cookie = sock_gen_cookie(ctx.selected_sk); } - ret = bpf_test_finish(kattr, uattr, NULL, 0, retval, duration); + ret = bpf_test_finish(kattr, uattr, NULL, NULL, 0, retval, duration); if (!ret) ret = bpf_ctx_finish(kattr, uattr, user_ctx, sizeof(*user_ctx));
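Condensed, the user-visible copy-back semantics after this change (a restatement of the code above, not new behavior): uattr->test.data_size_out always reports the full frame length, linear part plus sinfo->data_len, while the bytes actually written to data_out are clamped to the size the caller advertised, with the frag walk stopping at -ENOSPC once the clamp is hit:

    copy_size = size;                       /* linear + sinfo->data_len */
    if (kattr->test.data_size_out &&
        copy_size > kattr->test.data_size_out) {
            copy_size = kattr->test.data_size_out;
            err = -ENOSPC;                  /* user buffer too small */
    }
    /* copy the linear area first, then walk sinfo->frags until
     * copy_size bytes have been written out
     */

This lets a caller probe the required output buffer size without having to supply a large enough buffer up front.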
From patchwork Mon Jun 14 12:49:52 2021
From: Lorenzo Bianconi
Subject: [PATCH v9 bpf-next 14/14] bpf: update xdp_adjust_tail selftest to include multi-buffer
Date: Mon, 14 Jun 2021 14:49:52 +0200

From: Eelco Chaudron

This change adds test cases for the multi-buffer scenarios when shrinking and growing the packet tail; a note on the buffer layout these sizes assume follows the programs below.

Signed-off-by: Eelco Chaudron
Signed-off-by: Lorenzo Bianconi
---
 .../bpf/prog_tests/xdp_adjust_tail.c | 105 ++++++++++++++++++ .../bpf/progs/test_xdp_adjust_tail_grow.c | 10 +- .../bpf/progs/test_xdp_adjust_tail_shrink.c | 32 +++++- 3 files changed, 140 insertions(+), 7 deletions(-)

diff --git a/tools/testing/selftests/bpf/prog_tests/xdp_adjust_tail.c b/tools/testing/selftests/bpf/prog_tests/xdp_adjust_tail.c index d5c98f2cb12f..b936beaba797 100644 --- a/tools/testing/selftests/bpf/prog_tests/xdp_adjust_tail.c +++ b/tools/testing/selftests/bpf/prog_tests/xdp_adjust_tail.c @@ -130,6 +130,107 @@ void test_xdp_adjust_tail_grow2(void) bpf_object__close(obj); } +void test_xdp_adjust_mb_tail_shrink(void) +{ + const char *file = "./test_xdp_adjust_tail_shrink.o"; + __u32 duration, retval, size, exp_size; + struct bpf_object *obj; + static char buf[9000]; + int err, prog_fd; + + /* For the individual test cases, the first byte in the packet + * indicates which test will be run.
+ */ + + err = bpf_prog_load(file, BPF_PROG_TYPE_XDP, &obj, &prog_fd); + if (CHECK_FAIL(err)) + return; + + /* Test case removing 10 bytes from last frag, NOT freeing it */ + buf[0] = 0; + exp_size = sizeof(buf) - 10; + err = bpf_prog_test_run(prog_fd, 1, buf, sizeof(buf), + buf, &size, &retval, &duration); + + CHECK(err || retval != XDP_TX || size != exp_size, + "9k-10b", "err %d errno %d retval %d[%d] size %d[%u]\n", + err, errno, retval, XDP_TX, size, exp_size); + + /* Test case removing one of two pages, assuming 4K pages */ + buf[0] = 1; + exp_size = sizeof(buf) - 4100; + err = bpf_prog_test_run(prog_fd, 1, buf, sizeof(buf), + buf, &size, &retval, &duration); + + CHECK(err || retval != XDP_TX || size != exp_size, + "9k-1p", "err %d errno %d retval %d[%d] size %d[%u]\n", + err, errno, retval, XDP_TX, size, exp_size); + + /* Test case removing two pages resulting in a non mb xdp_buff */ + buf[0] = 2; + exp_size = sizeof(buf) - 8200; + err = bpf_prog_test_run(prog_fd, 1, buf, sizeof(buf), + buf, &size, &retval, &duration); + + CHECK(err || retval != XDP_TX || size != exp_size, + "9k-2p", "err %d errno %d retval %d[%d] size %d[%u]\n", + err, errno, retval, XDP_TX, size, exp_size); + + bpf_object__close(obj); +} + +void test_xdp_adjust_mb_tail_grow(void) +{ + const char *file = "./test_xdp_adjust_tail_grow.o"; + __u32 duration, retval, size, exp_size; + static char buf[16384]; + struct bpf_object *obj; + int err, i, prog_fd; + + err = bpf_prog_load(file, BPF_PROG_TYPE_XDP, &obj, &prog_fd); + if (CHECK_FAIL(err)) + return; + + /* Test case add 10 bytes to last frag */ + memset(buf, 1, sizeof(buf)); + size = 9000; + exp_size = size + 10; + err = bpf_prog_test_run(prog_fd, 1, buf, size, + buf, &size, &retval, &duration); + + CHECK(err || retval != XDP_TX || size != exp_size, + "9k+10b", "err %d retval %d[%d] size %d[%u]\n", + err, retval, XDP_TX, size, exp_size); + + for (i = 0; i < 9000; i++) + CHECK(buf[i] != 1, "9k+10b-old", + "Old data not all ok, offset %i is failing [%u]!\n", + i, buf[i]); + + for (i = 9000; i < 9010; i++) + CHECK(buf[i] != 0, "9k+10b-new", + "New data not all ok, offset %i is failing [%u]!\n", + i, buf[i]); + + for (i = 9010; i < sizeof(buf); i++) + CHECK(buf[i] != 1, "9k+10b-untouched", + "Unused data not all ok, offset %i is failing [%u]!\n", + i, buf[i]); + + /* Test a too large grow */ + memset(buf, 1, sizeof(buf)); + size = 9001; + exp_size = size; + err = bpf_prog_test_run(prog_fd, 1, buf, size, + buf, &size, &retval, &duration); + + CHECK(err || retval != XDP_DROP || size != exp_size, + "9k+10b", "err %d retval %d[%d] size %d[%u]\n", + err, retval, XDP_DROP, size, exp_size); + + bpf_object__close(obj); +} + void test_xdp_adjust_tail(void) { if (test__start_subtest("xdp_adjust_tail_shrink")) @@ -138,4 +239,8 @@ void test_xdp_adjust_tail(void) test_xdp_adjust_tail_grow(); if (test__start_subtest("xdp_adjust_tail_grow2")) test_xdp_adjust_tail_grow2(); + if (test__start_subtest("xdp_adjust_mb_tail_shrink")) + test_xdp_adjust_mb_tail_shrink(); + if (test__start_subtest("xdp_adjust_mb_tail_grow")) + test_xdp_adjust_mb_tail_grow(); } diff --git a/tools/testing/selftests/bpf/progs/test_xdp_adjust_tail_grow.c b/tools/testing/selftests/bpf/progs/test_xdp_adjust_tail_grow.c index 3d66599eee2e..3d43defb0e00 100644 --- a/tools/testing/selftests/bpf/progs/test_xdp_adjust_tail_grow.c +++ b/tools/testing/selftests/bpf/progs/test_xdp_adjust_tail_grow.c @@ -7,11 +7,10 @@ int _xdp_adjust_tail_grow(struct xdp_md *xdp) { void *data_end = (void *)(long)xdp->data_end; void *data = (void
*)(long)xdp->data; - unsigned int data_len; + int data_len = bpf_xdp_get_buff_len(xdp); int offset = 0; /* Data length determine test case */ - data_len = data_end - data; if (data_len == 54) { /* sizeof(pkt_v4) */ offset = 4096; /* test too large offset */ @@ -20,7 +19,12 @@ int _xdp_adjust_tail_grow(struct xdp_md *xdp) } else if (data_len == 64) { offset = 128; } else if (data_len == 128) { - offset = 4096 - 256 - 320 - data_len; /* Max tail grow 3520 */ + /* Max tail grow 3520 */ + offset = 4096 - 256 - 320 - data_len; + } else if (data_len == 9000) { + offset = 10; + } else if (data_len == 9001) { + offset = 4096; } else { return XDP_ABORTED; /* No matching test */ } diff --git a/tools/testing/selftests/bpf/progs/test_xdp_adjust_tail_shrink.c b/tools/testing/selftests/bpf/progs/test_xdp_adjust_tail_shrink.c index 22065a9cfb25..64177597ac29 100644 --- a/tools/testing/selftests/bpf/progs/test_xdp_adjust_tail_shrink.c +++ b/tools/testing/selftests/bpf/progs/test_xdp_adjust_tail_shrink.c @@ -14,14 +14,38 @@ int _version SEC("version") = 1; SEC("xdp_adjust_tail_shrink") int _xdp_adjust_tail_shrink(struct xdp_md *xdp) { - void *data_end = (void *)(long)xdp->data_end; - void *data = (void *)(long)xdp->data; + __u8 *data_end = (void *)(long)xdp->data_end; + __u8 *data = (void *)(long)xdp->data; int offset = 0; - if (data_end - data == 54) /* sizeof(pkt_v4) */ + switch (bpf_xdp_get_buff_len(xdp)) { + case 54: + /* sizeof(pkt_v4) */ offset = 256; /* shrink too much */ - else + break; + case 9000: + /* Multi-buffer test cases */ + if (data + 1 > data_end) + return XDP_DROP; + + switch (data[0]) { + case 0: + offset = 10; + break; + case 1: + offset = 4100; + break; + case 2: + offset = 8200; + break; + default: + return XDP_DROP; + } + break; + default: offset = 20; + break; + } if (bpf_xdp_adjust_tail(xdp, 0 - offset)) return XDP_DROP; return XDP_TX;
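For reference, the magic sizes used in these tests follow from the buffer layout bpf_prog_test_run_xdp() builds in patch 12: a linear area of max_data_sz = 4096 - headroom - tailroom bytes, plus page-sized frags for the rest of the input. Assuming XDP_PACKET_HEADROOM = 256 and an aligned struct skb_shared_info of 320 bytes (the usual value on 64-bit builds; the same 4096 - 256 - 320 arithmetic appears in the grow program above), a 9000-byte input is laid out as:

    linear: 4096 - 256 - 320     = 3520 bytes
    frag 0: PAGE_SIZE            = 4096 bytes
    frag 1: 9000 - 3520 - 4096   = 1384 bytes

so shrinking by 10 bytes only trims frag 1 (it is not freed), shrinking by 4100 frees exactly one page, and shrinking by 8200 leaves 800 bytes, i.e. a plain non multi-buff xdp_buff, matching the three shrink test cases.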