From patchwork Fri Mar 18 20:52:34 2022
From: Saeed Mahameed
To: "David S. Miller", Jakub Kicinski
Cc: netdev@vger.kernel.org, Maxim Mikityanskiy, Tariq Toukan, Saeed Mahameed
Subject: [net-next 01/15] net/mlx5e: Prepare non-linear legacy RQ for XDP multi buffer support
Date: Fri, 18 Mar 2022 13:52:34 -0700
Message-Id: <20220318205248.33367-2-saeed@kernel.org>

From: Maxim Mikityanskiy

mlx5e_skb_from_cqe_nonlinear now creates an xdp_buff first, putting the
first fragment as the linear part, and the rest of the fragments as
fragments to struct skb_shared_info in the tailroom. Then it creates an
SKB in place, based on the xdp_buff. The XDP program is not called yet
in this commit.

This commit contains no functional change, except that the SKB is built
over the whole frag_stride of the first fragment, instead of the
minimal size required (headroom, data and skb_shared_info).
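For readers following along, here is a minimal sketch (illustrative
only, not part of the patch) of how a multi-buffer xdp_buff laid out
this way can be walked. It uses the same kernel helper family the diff
below relies on; count_xdp_buff_bytes is a hypothetical name:

	/* Walk the linear part and the fragments of a multi-buffer
	 * xdp_buff. The fragment array lives in the skb_shared_info
	 * placed in the tailroom behind the linear part.
	 */
	static u32 count_xdp_buff_bytes(struct xdp_buff *xdp)
	{
		u32 total = xdp->data_end - xdp->data; /* linear part */

		if (xdp_buff_has_frags(xdp)) {
			struct skb_shared_info *sinfo =
				xdp_get_shared_info_from_buff(xdp);
			int i;

			/* Each fragment is a (page, offset, size) triple. */
			for (i = 0; i < sinfo->nr_frags; i++)
				total += skb_frag_size(&sinfo->frags[i]);
		}
		return total;
	}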
Signed-off-by: Maxim Mikityanskiy
Reviewed-by: Tariq Toukan
Signed-off-by: Saeed Mahameed
---
 .../net/ethernet/mellanox/mlx5/core/en_rx.c | 75 +++++++++++++++----
 1 file changed, 61 insertions(+), 14 deletions(-)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
index 4b8699f39200..dd8ff62e1693 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
@@ -1567,45 +1567,92 @@ mlx5e_skb_from_cqe_nonlinear(struct mlx5e_rq *rq, struct mlx5_cqe64 *cqe,
 			     struct mlx5e_wqe_frag_info *wi, u32 cqe_bcnt)
 {
 	struct mlx5e_rq_frag_info *frag_info = &rq->wqe.info.arr[0];
+	struct mlx5e_wqe_frag_info *head_wi = wi;
 	u16 rx_headroom = rq->buff.headroom;
 	struct mlx5e_dma_info *di = wi->di;
+	struct skb_shared_info *sinfo;
 	u32 frag_consumed_bytes;
-	u32 first_frag_size;
+	struct xdp_buff xdp;
 	struct sk_buff *skb;
+	u32 truesize;
 	void *va;
 
 	va = page_address(di->page) + wi->offset;
 	frag_consumed_bytes = min_t(u32, frag_info->frag_size, cqe_bcnt);
-	first_frag_size = MLX5_SKB_FRAG_SZ(rx_headroom + frag_consumed_bytes);
 
 	dma_sync_single_range_for_cpu(rq->pdev, di->addr, wi->offset,
-				      first_frag_size, DMA_FROM_DEVICE);
+				      rq->buff.frame0_sz, DMA_FROM_DEVICE);
 	net_prefetch(va + rx_headroom);
 
-	/* XDP is not supported in this configuration, as incoming packets
-	 * might spread among multiple pages.
-	 */
-	skb = mlx5e_build_linear_skb(rq, va, first_frag_size, rx_headroom,
-				     frag_consumed_bytes, 0);
-	if (unlikely(!skb))
-		return NULL;
-
-	page_ref_inc(di->page);
+	mlx5e_fill_xdp_buff(rq, va, rx_headroom, frag_consumed_bytes, &xdp);
+	sinfo = xdp_get_shared_info_from_buff(&xdp);
+	truesize = 0;
 
 	cqe_bcnt -= frag_consumed_bytes;
 	frag_info++;
 	wi++;
 
 	while (cqe_bcnt) {
+		skb_frag_t *frag;
+
+		di = wi->di;
+
 		frag_consumed_bytes = min_t(u32, frag_info->frag_size, cqe_bcnt);
 
-		mlx5e_add_skb_frag(rq, skb, wi->di, wi->offset,
-				   frag_consumed_bytes, frag_info->frag_stride);
+		dma_sync_single_for_cpu(rq->pdev, di->addr + wi->offset,
+					frag_consumed_bytes, DMA_FROM_DEVICE);
+
+		if (!xdp_buff_has_frags(&xdp)) {
+			/* Init on the first fragment to avoid cold cache access
+			 * when possible.
+			 */
+			sinfo->nr_frags = 0;
+			sinfo->xdp_frags_size = 0;
+			xdp_buff_set_frags_flag(&xdp);
+		}
+
+		frag = &sinfo->frags[sinfo->nr_frags++];
+		__skb_frag_set_page(frag, di->page);
+		skb_frag_off_set(frag, wi->offset);
+		skb_frag_size_set(frag, frag_consumed_bytes);
+
+		if (page_is_pfmemalloc(di->page))
+			xdp_buff_set_frag_pfmemalloc(&xdp);
+
+		sinfo->xdp_frags_size += frag_consumed_bytes;
+		truesize += frag_info->frag_stride;
+
 		cqe_bcnt -= frag_consumed_bytes;
 		frag_info++;
 		wi++;
 	}
 
+	di = head_wi->di;
+
+	skb = mlx5e_build_linear_skb(rq, xdp.data_hard_start, rq->buff.frame0_sz,
+				     xdp.data - xdp.data_hard_start,
+				     xdp.data_end - xdp.data,
+				     xdp.data - xdp.data_meta);
+	if (unlikely(!skb))
+		return NULL;
+
+	page_ref_inc(di->page);
+
+	if (unlikely(xdp_buff_has_frags(&xdp))) {
+		int i;
+
+		/* sinfo->nr_frags is reset by build_skb, calculate again. */
+		xdp_update_skb_shared_info(skb, wi - head_wi - 1,
+					   sinfo->xdp_frags_size, truesize,
+					   xdp_buff_is_frag_pfmemalloc(&xdp));
+
+		for (i = 0; i < sinfo->nr_frags; i++) {
+			skb_frag_t *frag = &sinfo->frags[i];
+
+			page_ref_inc(skb_frag_page(frag));
+		}
+	}
+
 	return skb;
 }
From patchwork Fri Mar 18 20:52:35 2022
From: Saeed Mahameed
To: "David S. Miller", Jakub Kicinski
Cc: netdev@vger.kernel.org, Maxim Mikityanskiy, Tariq Toukan, Saeed Mahameed
Subject: [net-next 02/15] net/mlx5e: Use fragments of the same size in non-linear legacy RQ with XDP
Date: Fri, 18 Mar 2022 13:52:35 -0700
Message-Id: <20220318205248.33367-3-saeed@kernel.org>

From: Maxim Mikityanskiy

The XDP multi buffer implementation in the kernel assumes that all
fragments have the same size. bpf_xdp_frags_increase_tail uses this
assumption to get the size of the last fragment, and
__xdp_build_skb_from_frame uses it to calculate truesize as
nr_frags * xdpf->frame_sz.

The current implementation of mlx5e uses fragments of different sizes
in the non-linear legacy RQ: specifically, the last fragment can be
larger than the others. It's an optimization for packets smaller than
the MTU.

This commit adapts mlx5e to the kernel limitations and makes it use
fragments of the same size, in order to add support for XDP multi
buffer. The change is applied only if XDP is active; otherwise, the old
optimization still applies.
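A worked example of the two budgets (illustrative; it assumes 4 KB
pages and MLX5E_MAX_RX_FRAGS == 4, the value mlx5 defines at the time
of this series), with first_frag_size = 3000 and frag_size = 2048:

	without XDP: 3000 + (4 - 2) * 2048 + 4096 = 11192 bytes
	with XDP:    3000 + (4 - 1) * 2048        =  9144 bytes

So the equal-size layout gives up PAGE_SIZE - frag_size bytes of MTU
budget (2048 here) in exchange for satisfying the assumption that the
kernel helpers make about fragment sizes.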
Signed-off-by: Maxim Mikityanskiy
Reviewed-by: Tariq Toukan
Signed-off-by: Saeed Mahameed
---
 .../ethernet/mellanox/mlx5/core/en/params.c | 28 +++++++++++++------
 1 file changed, 19 insertions(+), 9 deletions(-)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/params.c b/drivers/net/ethernet/mellanox/mlx5/core/en/params.c
index 5c4711be6fae..822fbb9b80e7 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en/params.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/params.c
@@ -398,8 +398,12 @@ void mlx5e_build_create_cq_param(struct mlx5e_create_cq_param *ccp, struct mlx5e
 	};
 }
 
-static int mlx5e_max_nonlinear_mtu(int first_frag_size, int frag_size)
+static int mlx5e_max_nonlinear_mtu(int first_frag_size, int frag_size, bool xdp)
 {
+	if (xdp)
+		/* XDP requires all fragments to be of the same size. */
+		return first_frag_size + (MLX5E_MAX_RX_FRAGS - 1) * frag_size;
+
 	/* Optimization for small packets: the last fragment is bigger than the others. */
 	return first_frag_size + (MLX5E_MAX_RX_FRAGS - 2) * frag_size + PAGE_SIZE;
 }
@@ -438,12 +442,14 @@ static int mlx5e_build_rq_frags_info(struct mlx5_core_dev *mdev,
 	headroom = mlx5e_get_linear_rq_headroom(params, xsk);
 	first_frag_size_max = SKB_WITH_OVERHEAD(frag_size_max - headroom);
 
-	max_mtu = mlx5e_max_nonlinear_mtu(first_frag_size_max, frag_size_max);
+	max_mtu = mlx5e_max_nonlinear_mtu(first_frag_size_max, frag_size_max,
+					  params->xdp_prog);
 	if (byte_count > max_mtu) {
 		frag_size_max = PAGE_SIZE;
 		first_frag_size_max = SKB_WITH_OVERHEAD(frag_size_max - headroom);
 
-		max_mtu = mlx5e_max_nonlinear_mtu(first_frag_size_max, frag_size_max);
+		max_mtu = mlx5e_max_nonlinear_mtu(first_frag_size_max, frag_size_max,
+						  params->xdp_prog);
 		if (byte_count > max_mtu) {
 			mlx5_core_err(mdev, "MTU %u is too big for non-linear legacy RQ (max %d)\n",
 				      params->sw_mtu, max_mtu);
@@ -463,14 +469,18 @@ static int mlx5e_build_rq_frags_info(struct mlx5_core_dev *mdev,
 		info->arr[i].frag_size = frag_size;
 		buf_size += frag_size;
 
-		if (i == 0) {
-			/* Ensure that headroom and tailroom are included. */
-			frag_size += headroom;
-			frag_size += SKB_DATA_ALIGN(sizeof(struct skb_shared_info));
+		if (params->xdp_prog) {
+			/* XDP multi buffer expects fragments of the same size. */
+			info->arr[i].frag_stride = frag_size_max;
+		} else {
+			if (i == 0) {
+				/* Ensure that headroom and tailroom are included. */
+				frag_size += headroom;
+				frag_size += SKB_DATA_ALIGN(sizeof(struct skb_shared_info));
+			}
+			info->arr[i].frag_stride = roundup_pow_of_two(frag_size);
 		}
 
-		info->arr[i].frag_stride = roundup_pow_of_two(frag_size);
-
 		i++;
 	}
 
 	info->num_frags = i;
From patchwork Fri Mar 18 20:52:36 2022
From: Saeed Mahameed
To: "David S. Miller", Jakub Kicinski
Cc: netdev@vger.kernel.org, Maxim Mikityanskiy, Tariq Toukan, Saeed Mahameed
Subject: [net-next 03/15] net/mlx5e: Use page-sized fragments with XDP multi buffer
Date: Fri, 18 Mar 2022 13:52:36 -0700
Message-Id: <20220318205248.33367-4-saeed@kernel.org>

From: Maxim Mikityanskiy

The implementation of XDP in mlx5e assumes that the frame size is equal
to the page size. Force this limitation in the non-linear mode for XDP
multi buffer.
Signed-off-by: Maxim Mikityanskiy
Reviewed-by: Tariq Toukan
Signed-off-by: Saeed Mahameed
---
 drivers/net/ethernet/mellanox/mlx5/core/en/params.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/params.c b/drivers/net/ethernet/mellanox/mlx5/core/en/params.c
index 822fbb9b80e7..9646867872c1 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en/params.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/params.c
@@ -444,7 +444,7 @@ static int mlx5e_build_rq_frags_info(struct mlx5_core_dev *mdev,
 
 	max_mtu = mlx5e_max_nonlinear_mtu(first_frag_size_max, frag_size_max,
 					  params->xdp_prog);
-	if (byte_count > max_mtu) {
+	if (byte_count > max_mtu || params->xdp_prog) {
 		frag_size_max = PAGE_SIZE;
 		first_frag_size_max = SKB_WITH_OVERHEAD(frag_size_max - headroom);
 

From patchwork Fri Mar 18 20:52:37 2022
From: Saeed Mahameed
To: "David S. Miller", Jakub Kicinski
Cc: netdev@vger.kernel.org, Maxim Mikityanskiy, Tariq Toukan, Saeed Mahameed
Subject: [net-next 04/15] net/mlx5e: Add XDP multi buffer support to the non-linear legacy RQ
Date: Fri, 18 Mar 2022 13:52:37 -0700
Message-Id: <20220318205248.33367-5-saeed@kernel.org>

From: Maxim Mikityanskiy

This commit adds XDP multi buffer support to the RX path in the
non-linear legacy RQ mode. mlx5e_xdp_handle is called from
mlx5e_skb_from_cqe_nonlinear. The XDP_TX action for fragmented XDP
frames is not yet supported and is blocked.
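From the BPF side, a program has to declare multi-buffer support to
receive such frames. A minimal sketch (illustrative only, not part of
the patch; it assumes a kernel and libbpf recent enough to provide the
bpf_xdp_get_buff_len() helper and the "xdp.frags" section name, which
landed in the same development cycle as this series):

	#include <linux/bpf.h>
	#include <bpf/bpf_helpers.h>

	SEC("xdp.frags")
	int drop_oversized(struct xdp_md *ctx)
	{
		/* Counts the linear part plus all fragments. */
		if (bpf_xdp_get_buff_len(ctx) > 3000)
			return XDP_DROP;
		return XDP_PASS;
	}

	char _license[] SEC("license") = "GPL";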
Signed-off-by: Maxim Mikityanskiy
Reviewed-by: Tariq Toukan
Signed-off-by: Saeed Mahameed
---
 drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c |  5 +++++
 drivers/net/ethernet/mellanox/mlx5/core/en_rx.c  | 13 +++++++++++++
 2 files changed, 18 insertions(+)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c b/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c
index 6aa77f0e094e..3a837030e96e 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c
@@ -68,6 +68,11 @@ mlx5e_xmit_xdp_buff(struct mlx5e_xdpsq *sq, struct mlx5e_rq *rq,
 	if (unlikely(!xdpf))
 		return false;
 
+	if (unlikely(xdp_frame_has_frags(xdpf))) {
+		xdp_return_frame(xdpf);
+		return false;
+	}
+
 	xdptxd.data = xdpf->data;
 	xdptxd.len = xdpf->len;
 
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
index dd8ff62e1693..e9ad5b3a30ed 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
@@ -1572,6 +1572,7 @@ mlx5e_skb_from_cqe_nonlinear(struct mlx5e_rq *rq, struct mlx5_cqe64 *cqe,
 	struct mlx5e_dma_info *di = wi->di;
 	struct skb_shared_info *sinfo;
 	u32 frag_consumed_bytes;
+	struct bpf_prog *prog;
 	struct xdp_buff xdp;
 	struct sk_buff *skb;
 	u32 truesize;
@@ -1582,6 +1583,7 @@ mlx5e_skb_from_cqe_nonlinear(struct mlx5e_rq *rq, struct mlx5_cqe64 *cqe,
 
 	dma_sync_single_range_for_cpu(rq->pdev, di->addr, wi->offset,
 				      rq->buff.frame0_sz, DMA_FROM_DEVICE);
+	net_prefetchw(va); /* xdp_frame data area */
 	net_prefetch(va + rx_headroom);
 
 	mlx5e_fill_xdp_buff(rq, va, rx_headroom, frag_consumed_bytes, &xdp);
@@ -1629,6 +1631,17 @@ mlx5e_skb_from_cqe_nonlinear(struct mlx5e_rq *rq, struct mlx5_cqe64 *cqe,
 
 	di = head_wi->di;
 
+	prog = rcu_dereference(rq->xdp_prog);
+	if (prog && mlx5e_xdp_handle(rq, di, prog, &xdp)) {
+		if (test_bit(MLX5E_RQ_FLAG_XDP_XMIT, rq->flags)) {
+			int i;
+
+			for (i = wi - head_wi; i < rq->wqe.info.num_frags; i++)
+				mlx5e_put_rx_frag(rq, &head_wi[i], true);
+		}
+		return NULL; /* page/packet was consumed by XDP */
+	}
+
 	skb = mlx5e_build_linear_skb(rq, xdp.data_hard_start, rq->buff.frame0_sz,
 				     xdp.data - xdp.data_hard_start,
 				     xdp.data_end - xdp.data,
From patchwork Fri Mar 18 20:52:38 2022
From: Saeed Mahameed
To: "David S. Miller", Jakub Kicinski
Cc: netdev@vger.kernel.org, Maxim Mikityanskiy, Saeed Mahameed
Subject: [net-next 05/15] net/mlx5e: Store DMA address inside struct page
Date: Fri, 18 Mar 2022 13:52:38 -0700
Message-Id: <20220318205248.33367-6-saeed@kernel.org>

From: Maxim Mikityanskiy

Use page_pool_set_dma_addr() to store the DMA address of a page inside
struct page, in order to avoid passing struct mlx5e_dma_info to XDP
handlers. Previously, struct mlx5e_dma_info was used to pass both the
DMA address and the page, and it worked well for the single-fragment
case.

When XDP multi buffer is in use and a fragmented xdp_frame has to be
transmitted, the driver needs to know the DMA addresses of the
fragments; however, the array of fragments in struct skb_shared_info
doesn't contain them. In order to pass the DMA addresses, the driver
puts them into struct page itself, which is accessible from the array
of fragments in struct skb_shared_info. The existing XDP handlers are
modified to remove the dependency on struct mlx5e_dma_info.
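The pattern, sketched outside the driver code (illustrative only; the
two accessors are the page_pool helpers from <net/page_pool.h>, and
frag_dma_addr is a hypothetical name):

	/* At allocation/mapping time: stash the DMA address in the page. */
	page_pool_set_dma_addr(page, dma_addr);

	/* Later, where only the fragment array is available: a fragment
	 * is just (page, offset, size), so the DMA address is recovered
	 * from the page itself.
	 */
	static dma_addr_t frag_dma_addr(const skb_frag_t *frag)
	{
		return page_pool_get_dma_addr(skb_frag_page(frag)) +
		       skb_frag_off(frag);
	}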
Signed-off-by: Maxim Mikityanskiy
Signed-off-by: Saeed Mahameed
---
 drivers/net/ethernet/mellanox/mlx5/core/en.h  |  2 +-
 .../net/ethernet/mellanox/mlx5/core/en/txrx.h |  6 +--
 .../net/ethernet/mellanox/mlx5/core/en/xdp.c  | 14 +++----
 .../net/ethernet/mellanox/mlx5/core/en/xdp.h  |  2 +-
 .../net/ethernet/mellanox/mlx5/core/en_main.c |  2 +-
 .../net/ethernet/mellanox/mlx5/core/en_rx.c   | 40 ++++++++++---------
 6 files changed, 33 insertions(+), 33 deletions(-)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en.h b/drivers/net/ethernet/mellanox/mlx5/core/en.h
index 2704c7537481..f5b2449fa15a 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en.h
@@ -515,7 +515,7 @@ struct mlx5e_xdp_info {
 		} frame;
 		struct {
 			struct mlx5e_rq *rq;
-			struct mlx5e_dma_info di;
+			struct page *page;
 		} page;
 	};
 };
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/txrx.h b/drivers/net/ethernet/mellanox/mlx5/core/en/txrx.h
index 210d23bf3701..c208ea307bff 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en/txrx.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/txrx.h
@@ -44,10 +44,8 @@ int mlx5e_napi_poll(struct napi_struct *napi, int budget);
 int mlx5e_poll_ico_cq(struct mlx5e_cq *cq);
 
 /* RX */
-void mlx5e_page_dma_unmap(struct mlx5e_rq *rq, struct mlx5e_dma_info *dma_info);
-void mlx5e_page_release_dynamic(struct mlx5e_rq *rq,
-				struct mlx5e_dma_info *dma_info,
-				bool recycle);
+void mlx5e_page_dma_unmap(struct mlx5e_rq *rq, struct page *page);
+void mlx5e_page_release_dynamic(struct mlx5e_rq *rq, struct page *page, bool recycle);
 INDIRECT_CALLABLE_DECLARE(bool mlx5e_post_rx_wqes(struct mlx5e_rq *rq));
 INDIRECT_CALLABLE_DECLARE(bool mlx5e_post_rx_mpwqes(struct mlx5e_rq *rq));
 int mlx5e_poll_rx_cq(struct mlx5e_cq *cq, int budget);
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c b/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c
index 3a837030e96e..91dd5c59657b 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c
@@ -57,7 +57,7 @@ int mlx5e_xdp_max_mtu(struct mlx5e_params *params, struct mlx5e_xsk_param *xsk)
 
 static inline bool
 mlx5e_xmit_xdp_buff(struct mlx5e_xdpsq *sq, struct mlx5e_rq *rq,
-		    struct mlx5e_dma_info *di, struct xdp_buff *xdp)
+		    struct page *page, struct xdp_buff *xdp)
 {
 	struct mlx5e_xmit_data xdptxd;
 	struct mlx5e_xdp_info xdpi;
@@ -110,13 +110,13 @@ mlx5e_xmit_xdp_buff(struct mlx5e_xdpsq *sq, struct mlx5e_rq *rq,
 
 		xdpi.mode = MLX5E_XDP_XMIT_MODE_PAGE;
 
-		dma_addr = di->addr + (xdpf->data - (void *)xdpf);
+		dma_addr = page_pool_get_dma_addr(page) + (xdpf->data - (void *)xdpf);
 		dma_sync_single_for_device(sq->pdev, dma_addr, xdptxd.len, DMA_TO_DEVICE);
 
 		xdptxd.dma_addr = dma_addr;
 		xdpi.page.rq = rq;
-		xdpi.page.di = *di;
+		xdpi.page.page = page;
 	}
 
 	return INDIRECT_CALL_2(sq->xmit_xdp_frame, mlx5e_xmit_xdp_frame_mpwqe,
@@ -124,7 +124,7 @@ mlx5e_xmit_xdp_buff(struct mlx5e_xdpsq *sq, struct mlx5e_rq *rq,
 }
 
 /* returns true if packet was consumed by xdp */
-bool mlx5e_xdp_handle(struct mlx5e_rq *rq, struct mlx5e_dma_info *di,
+bool mlx5e_xdp_handle(struct mlx5e_rq *rq, struct page *page,
 		      struct bpf_prog *prog, struct xdp_buff *xdp)
 {
 	u32 act;
@@ -135,7 +135,7 @@ bool mlx5e_xdp_handle(struct mlx5e_rq *rq, struct mlx5e_dma_info *di,
 	case XDP_PASS:
 		return false;
 	case XDP_TX:
-		if (unlikely(!mlx5e_xmit_xdp_buff(rq->xdpsq, rq, di, xdp)))
+		if (unlikely(!mlx5e_xmit_xdp_buff(rq->xdpsq, rq, page, xdp)))
 			goto xdp_abort;
 		__set_bit(MLX5E_RQ_FLAG_XDP_XMIT, rq->flags); /* non-atomic */
 		return true;
@@ -147,7 +147,7 @@ bool mlx5e_xdp_handle(struct mlx5e_rq *rq, struct mlx5e_dma_info *di,
 		__set_bit(MLX5E_RQ_FLAG_XDP_XMIT, rq->flags);
 		__set_bit(MLX5E_RQ_FLAG_XDP_REDIRECT, rq->flags);
 		if (xdp->rxq->mem.type != MEM_TYPE_XSK_BUFF_POOL)
-			mlx5e_page_dma_unmap(rq, di);
+			mlx5e_page_dma_unmap(rq, page);
 		rq->stats->xdp_redirect++;
 		return true;
 	default:
@@ -384,7 +384,7 @@ static void mlx5e_free_xdpsq_desc(struct mlx5e_xdpsq *sq,
 			break;
 		case MLX5E_XDP_XMIT_MODE_PAGE:
 			/* XDP_TX from the regular RQ */
-			mlx5e_page_release_dynamic(xdpi.page.rq, &xdpi.page.di, recycle);
+			mlx5e_page_release_dynamic(xdpi.page.rq, xdpi.page.page, recycle);
 			break;
 		case MLX5E_XDP_XMIT_MODE_XSK:
 			/* AF_XDP send */
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.h b/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.h
index 20d8af66c072..8a92cf007991 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.h
@@ -47,7 +47,7 @@ struct mlx5e_xsk_param;
 int mlx5e_xdp_max_mtu(struct mlx5e_params *params, struct mlx5e_xsk_param *xsk);
-bool mlx5e_xdp_handle(struct mlx5e_rq *rq, struct mlx5e_dma_info *di,
+bool mlx5e_xdp_handle(struct mlx5e_rq *rq, struct page *page,
 		      struct bpf_prog *prog, struct xdp_buff *xdp);
 void mlx5e_xdp_mpwqe_complete(struct mlx5e_xdpsq *sq);
 bool mlx5e_poll_xdpsq_cq(struct mlx5e_cq *cq);
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
index 91b90bbb2b28..f21cae712ce5 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
@@ -780,7 +780,7 @@ static void mlx5e_free_rq(struct mlx5e_rq *rq)
 		 * entered, and it's safe to call mlx5e_page_release_dynamic
 		 * directly.
 		 */
-		mlx5e_page_release_dynamic(rq, dma_info, false);
+		mlx5e_page_release_dynamic(rq, dma_info->page, false);
 	}
 
 	xdp_rxq_info_unreg(&rq->xdp_rxq);
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
index e9ad5b3a30ed..56bb58704bf9 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
@@ -222,8 +222,7 @@ static inline u32 mlx5e_decompress_cqes_start(struct mlx5e_rq *rq,
 	return mlx5e_decompress_cqes_cont(rq, wq, 1, budget_rem) - 1;
 }
 
-static inline bool mlx5e_rx_cache_put(struct mlx5e_rq *rq,
-				      struct mlx5e_dma_info *dma_info)
+static inline bool mlx5e_rx_cache_put(struct mlx5e_rq *rq, struct page *page)
 {
 	struct mlx5e_page_cache *cache = &rq->page_cache;
 	u32 tail_next = (cache->tail + 1) & (MLX5E_CACHE_SIZE - 1);
@@ -234,12 +233,13 @@ static inline bool mlx5e_rx_cache_put(struct mlx5e_rq *rq,
 		return false;
 	}
 
-	if (!dev_page_is_reusable(dma_info->page)) {
+	if (!dev_page_is_reusable(page)) {
 		stats->cache_waive++;
 		return false;
 	}
 
-	cache->page_cache[cache->tail] = *dma_info;
+	cache->page_cache[cache->tail].page = page;
+	cache->page_cache[cache->tail].addr = page_pool_get_dma_addr(page);
 	cache->tail = tail_next;
 	return true;
 }
@@ -287,6 +287,7 @@ static inline int mlx5e_page_alloc_pool(struct mlx5e_rq *rq,
 		dma_info->page = NULL;
 		return -ENOMEM;
 	}
+	page_pool_set_dma_addr(dma_info->page, dma_info->addr);
 
 	return 0;
 }
@@ -300,26 +301,27 @@ static inline int mlx5e_page_alloc(struct mlx5e_rq *rq,
 	return mlx5e_page_alloc_pool(rq, dma_info);
 }
 
-void mlx5e_page_dma_unmap(struct mlx5e_rq *rq, struct mlx5e_dma_info *dma_info)
+void mlx5e_page_dma_unmap(struct mlx5e_rq *rq, struct page *page)
 {
-	dma_unmap_page_attrs(rq->pdev, dma_info->addr, PAGE_SIZE, rq->buff.map_dir,
+	dma_addr_t dma_addr = page_pool_get_dma_addr(page);
+
+	dma_unmap_page_attrs(rq->pdev, dma_addr, PAGE_SIZE, rq->buff.map_dir,
 			     DMA_ATTR_SKIP_CPU_SYNC);
+	page_pool_set_dma_addr(page, 0);
 }
 
-void mlx5e_page_release_dynamic(struct mlx5e_rq *rq,
-				struct mlx5e_dma_info *dma_info,
-				bool recycle)
+void mlx5e_page_release_dynamic(struct mlx5e_rq *rq, struct page *page, bool recycle)
 {
 	if (likely(recycle)) {
-		if (mlx5e_rx_cache_put(rq, dma_info))
+		if (mlx5e_rx_cache_put(rq, page))
 			return;
 
-		mlx5e_page_dma_unmap(rq, dma_info);
-		page_pool_recycle_direct(rq->page_pool, dma_info->page);
+		mlx5e_page_dma_unmap(rq, page);
+		page_pool_recycle_direct(rq->page_pool, page);
 	} else {
-		mlx5e_page_dma_unmap(rq, dma_info);
-		page_pool_release_page(rq->page_pool, dma_info->page);
-		put_page(dma_info->page);
+		mlx5e_page_dma_unmap(rq, page);
+		page_pool_release_page(rq->page_pool, page);
+		put_page(page);
 	}
 }
 
@@ -334,7 +336,7 @@ static inline void mlx5e_page_release(struct mlx5e_rq *rq,
 		 */
 		xsk_buff_free(dma_info->xsk);
 	else
-		mlx5e_page_release_dynamic(rq, dma_info, recycle);
+		mlx5e_page_release_dynamic(rq, dma_info->page, recycle);
 }
 
 static inline int mlx5e_get_rx_frag(struct mlx5e_rq *rq,
@@ -1544,7 +1546,7 @@ mlx5e_skb_from_cqe_linear(struct mlx5e_rq *rq, struct mlx5_cqe64 *cqe,
 		net_prefetchw(va); /* xdp_frame data area */
 		mlx5e_fill_xdp_buff(rq, va, rx_headroom, cqe_bcnt, &xdp);
-		if (mlx5e_xdp_handle(rq, di, prog, &xdp))
+		if (mlx5e_xdp_handle(rq, di->page, prog, &xdp))
 			return NULL; /* page/packet was consumed by XDP */
 
 		rx_headroom = xdp.data - xdp.data_hard_start;
@@ -1632,7 +1634,7 @@ mlx5e_skb_from_cqe_nonlinear(struct mlx5e_rq *rq, struct mlx5_cqe64 *cqe,
 	di = head_wi->di;
 
 	prog = rcu_dereference(rq->xdp_prog);
-	if (prog && mlx5e_xdp_handle(rq, di, prog, &xdp)) {
+	if (prog && mlx5e_xdp_handle(rq, di->page, prog, &xdp)) {
 		if (test_bit(MLX5E_RQ_FLAG_XDP_XMIT, rq->flags)) {
 			int i;
 
@@ -1934,7 +1936,7 @@ mlx5e_skb_from_cqe_mpwrq_linear(struct mlx5e_rq *rq, struct mlx5e_mpw_info *wi,
 	net_prefetchw(va); /* xdp_frame data area */
 	mlx5e_fill_xdp_buff(rq, va, rx_headroom, cqe_bcnt, &xdp);
-	if (mlx5e_xdp_handle(rq, di, prog, &xdp)) {
+	if (mlx5e_xdp_handle(rq, di->page, prog, &xdp)) {
 		if (__test_and_clear_bit(MLX5E_RQ_FLAG_XDP_XMIT, rq->flags))
 			__set_bit(page_idx, wi->xdp_xmit_bitmap); /* non-atomic */
 		return NULL; /* page/packet was consumed by XDP */

From patchwork Fri Mar 18 20:52:39 2022
From: Saeed Mahameed
To: "David S. Miller", Jakub Kicinski
Cc: netdev@vger.kernel.org, Maxim Mikityanskiy, Saeed Mahameed
Subject: [net-next 06/15] net/mlx5e: Move mlx5e_xdpi_fifo_push out of xmit_xdp_frame
Date: Fri, 18 Mar 2022 13:52:39 -0700
Message-Id: <20220318205248.33367-7-saeed@kernel.org>

From: Maxim Mikityanskiy

The implementations of xmit_xdp_frame get the xdpi parameter of type
struct mlx5e_xdp_info for the sole purpose of calling
mlx5e_xdpi_fifo_push() on success. This commit moves this call outside
of xmit_xdp_frame, shifting this responsibility to the caller. It will
allow more fine-grained handling of XDP info for cases when an
xdp_frame is fragmented.
Signed-off-by: Maxim Mikityanskiy
Signed-off-by: Saeed Mahameed
---
 drivers/net/ethernet/mellanox/mlx5/core/en.h    |  1 -
 .../net/ethernet/mellanox/mlx5/core/en/xdp.c    | 17 ++++++++++-------
 .../net/ethernet/mellanox/mlx5/core/en/xdp.h    |  2 --
 .../net/ethernet/mellanox/mlx5/core/en/xsk/tx.c |  4 +++-
 4 files changed, 13 insertions(+), 11 deletions(-)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en.h b/drivers/net/ethernet/mellanox/mlx5/core/en.h
index f5b2449fa15a..d084f930bb37 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en.h
@@ -537,7 +537,6 @@ struct mlx5e_xdpsq;
 typedef int (*mlx5e_fp_xmit_xdp_frame_check)(struct mlx5e_xdpsq *);
 typedef bool (*mlx5e_fp_xmit_xdp_frame)(struct mlx5e_xdpsq *,
 					struct mlx5e_xmit_data *,
-					struct mlx5e_xdp_info *,
 					int);
 
 struct mlx5e_xdpsq {
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c b/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c
index 91dd5c59657b..b1114f854057 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c
@@ -119,8 +119,12 @@ mlx5e_xmit_xdp_buff(struct mlx5e_xdpsq *sq, struct mlx5e_rq *rq,
 		xdpi.page.page = page;
 	}
 
-	return INDIRECT_CALL_2(sq->xmit_xdp_frame, mlx5e_xmit_xdp_frame_mpwqe,
-			       mlx5e_xmit_xdp_frame, sq, &xdptxd, &xdpi, 0);
+	if (unlikely(!INDIRECT_CALL_2(sq->xmit_xdp_frame, mlx5e_xmit_xdp_frame_mpwqe,
+				      mlx5e_xmit_xdp_frame, sq, &xdptxd, 0)))
+		return false;
+
+	mlx5e_xdpi_fifo_push(&sq->db.xdpi_fifo, &xdpi);
+	return true;
 }
 
 /* returns true if packet was consumed by xdp */
@@ -261,7 +265,7 @@ INDIRECT_CALLABLE_SCOPE int mlx5e_xmit_xdp_frame_check_mpwqe(struct mlx5e_xdpsq
 
 INDIRECT_CALLABLE_SCOPE bool
 mlx5e_xmit_xdp_frame_mpwqe(struct mlx5e_xdpsq *sq, struct mlx5e_xmit_data *xdptxd,
-			   struct mlx5e_xdp_info *xdpi, int check_result)
+			   int check_result)
 {
 	struct mlx5e_tx_mpwqe *session = &sq->mpwqe;
 	struct mlx5e_xdpsq_stats *stats = sq->stats;
@@ -289,7 +293,6 @@ mlx5e_xmit_xdp_frame_mpwqe(struct mlx5e_xdpsq *sq, struct mlx5e_xmit_data *xdptx
 	if (unlikely(mlx5e_xdp_mpqwe_is_full(session, sq->max_sq_mpw_wqebbs)))
 		mlx5e_xdp_mpwqe_complete(sq);
 
-	mlx5e_xdpi_fifo_push(&sq->db.xdpi_fifo, xdpi);
 	stats->xmit++;
 	return true;
 }
@@ -308,7 +311,7 @@ INDIRECT_CALLABLE_SCOPE int mlx5e_xmit_xdp_frame_check(struct mlx5e_xdpsq *sq)
 
 INDIRECT_CALLABLE_SCOPE bool
 mlx5e_xmit_xdp_frame(struct mlx5e_xdpsq *sq, struct mlx5e_xmit_data *xdptxd,
-		     struct mlx5e_xdp_info *xdpi, int check_result)
+		     int check_result)
 {
 	struct mlx5_wq_cyc *wq = &sq->wq;
 	u16 pi = mlx5_wq_cyc_ctr2ix(wq, sq->pc);
@@ -358,7 +361,6 @@ mlx5e_xmit_xdp_frame(struct mlx5e_xdpsq *sq, struct mlx5e_xmit_data *xdptxd,
 
 	sq->doorbell_cseg = cseg;
 
-	mlx5e_xdpi_fifo_push(&sq->db.xdpi_fifo, xdpi);
 	stats->xmit++;
 	return true;
 }
@@ -537,12 +539,13 @@ int mlx5e_xdp_xmit(struct net_device *dev, int n, struct xdp_frame **frames,
 		xdpi.frame.dma_addr = xdptxd.dma_addr;
 
 		ret = INDIRECT_CALL_2(sq->xmit_xdp_frame, mlx5e_xmit_xdp_frame_mpwqe,
-				      mlx5e_xmit_xdp_frame, sq, &xdptxd, &xdpi, 0);
+				      mlx5e_xmit_xdp_frame, sq, &xdptxd, 0);
 		if (unlikely(!ret)) {
 			dma_unmap_single(sq->pdev, xdptxd.dma_addr,
 					 xdptxd.len, DMA_TO_DEVICE);
 			break;
 		}
+		mlx5e_xdpi_fifo_push(&sq->db.xdpi_fifo, &xdpi);
 		nxmit++;
 	}
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.h b/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.h
index 8a92cf007991..ce31828b6c19 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.h
@@ -59,11 +59,9 @@ int mlx5e_xdp_xmit(struct net_device *dev, int n, struct xdp_frame **frames,
 
 INDIRECT_CALLABLE_DECLARE(bool mlx5e_xmit_xdp_frame_mpwqe(struct mlx5e_xdpsq *sq,
 							   struct mlx5e_xmit_data *xdptxd,
-							   struct mlx5e_xdp_info *xdpi,
 							   int check_result));
 INDIRECT_CALLABLE_DECLARE(bool mlx5e_xmit_xdp_frame(struct mlx5e_xdpsq *sq,
 						    struct mlx5e_xmit_data *xdptxd,
-						    struct mlx5e_xdp_info *xdpi,
 						    int check_result));
 INDIRECT_CALLABLE_DECLARE(int mlx5e_xmit_xdp_frame_check_mpwqe(struct mlx5e_xdpsq *sq));
 INDIRECT_CALLABLE_DECLARE(int mlx5e_xmit_xdp_frame_check(struct mlx5e_xdpsq *sq));
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/tx.c b/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/tx.c
index 8e96260fce1d..5a889835039c 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/tx.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/tx.c
@@ -103,12 +103,14 @@ bool mlx5e_xsk_tx(struct mlx5e_xdpsq *sq, unsigned int budget)
 		xsk_buff_raw_dma_sync_for_device(pool, xdptxd.dma_addr, xdptxd.len);
 
 		ret = INDIRECT_CALL_2(sq->xmit_xdp_frame, mlx5e_xmit_xdp_frame_mpwqe,
-				      mlx5e_xmit_xdp_frame, sq, &xdptxd, &xdpi, check_result);
+				      mlx5e_xmit_xdp_frame, sq, &xdptxd, check_result);
 		if (unlikely(!ret)) {
 			if (sq->mpwqe.wqe)
 				mlx5e_xdp_mpwqe_complete(sq);
 
 			mlx5e_xsk_tx_post_err(sq, &xdpi);
+		} else {
+			mlx5e_xdpi_fifo_push(&sq->db.xdpi_fifo, &xdpi);
 		}
 
 		flush = true;

From patchwork Fri Mar 18 20:52:40 2022
From: Saeed Mahameed
To: "David S. Miller", Jakub Kicinski
Cc: netdev@vger.kernel.org, Maxim Mikityanskiy, Tariq Toukan, Saeed Mahameed
Subject: [net-next 07/15] net/mlx5e: Remove assignment of inline_hdr.sz on XDP TX
Date: Fri, 18 Mar 2022 13:52:40 -0700
Message-Id: <20220318205248.33367-8-saeed@kernel.org>

From: Maxim Mikityanskiy

When MPWQE is disabled, mlx5e_open_xdpsq prefills the common fields of
WQEs in the XDP SQ to save time when sending packets. One such field is
eseg->inline_hdr.sz, which can be either 0 or MLX5E_XDP_MIN_INLINE,
depending on the inline mode of the SQ.

The inline mode can't change during the lifetime of the SQ, so setting
this field again in mlx5e_xmit_xdp_frame is redundant. Moreover, the
xmit function only sets it to MLX5E_XDP_MIN_INLINE, never back to 0 for
the other case. This commit removes the redundant assignment in
mlx5e_xmit_xdp_frame.

Signed-off-by: Maxim Mikityanskiy
Reviewed-by: Tariq Toukan
Signed-off-by: Saeed Mahameed
---
 drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c | 1 -
 1 file changed, 1 deletion(-)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c b/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c
index b1114f854057..9478df6c87bc 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c
@@ -343,7 +343,6 @@ mlx5e_xmit_xdp_frame(struct mlx5e_xdpsq *sq, struct mlx5e_xmit_data *xdptxd,
 	/* copy the inline part if required */
 	if (sq->min_inline_mode != MLX5_INLINE_MODE_NONE) {
 		memcpy(eseg->inline_hdr.start, xdptxd->data, sizeof(eseg->inline_hdr.start));
-		eseg->inline_hdr.sz = cpu_to_be16(MLX5E_XDP_MIN_INLINE);
 		memcpy(dseg, xdptxd->data + sizeof(eseg->inline_hdr.start),
 		       MLX5E_XDP_MIN_INLINE - sizeof(eseg->inline_hdr.start));
 		dma_len  -= MLX5E_XDP_MIN_INLINE;
From patchwork Fri Mar 18 20:52:41 2022
From: Saeed Mahameed
To: "David S. Miller", Jakub Kicinski
Cc: netdev@vger.kernel.org, Maxim Mikityanskiy, Saeed Mahameed
Subject: [net-next 08/15] net/mlx5e: Don't prefill WQEs in XDP SQ in the multi buffer mode
Date: Fri, 18 Mar 2022 13:52:41 -0700
Message-Id: <20220318205248.33367-9-saeed@kernel.org>

From: Maxim Mikityanskiy

When MPWQE is disabled, mlx5e_open_xdpsq() prefills the common fields
of WQEs in the XDP SQ to save time when sending packets.
mlx5e_xmit_xdp_frame() runs on the prefilled fields; however, sending
multi buffer XDP frames would require changing some of these fields on
a per-packet basis. Besides that, mlx5e_xmit_xdp_frame() will be used
as a fallback to send multi buffer XDP frames when MPWQE is enabled
(MPWQE can only handle linear packets).

In order to prepare for XDP multi buffer support, this commit
introduces a mode for mlx5e_xmit_xdp_frame() that fills all the fields
itself.
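For orientation, the sizing arithmetic of the new mode looks roughly
like this (a sketch with a hypothetical nr_frags local; it assumes a
64-byte WQEBB holding four 16-byte data segments, i.e.
MLX5_SEND_WQEBB_NUM_DS == 4, and that MLX5E_TX_WQE_EMPTY_DS_COUNT
counts the control + eth segments of an empty WQE, i.e. 2):

	/* ds_cnt counts 16-byte data segments: the empty WQE (ctrl + eth),
	 * one for the linear part, one more if an inline header is used,
	 * and later one per fragment.
	 */
	u16 ds_cnt = MLX5E_TX_WQE_EMPTY_DS_COUNT + 1;
	u8 num_wqebbs;

	if (sq->min_inline_mode != MLX5_INLINE_MODE_NONE)
		ds_cnt++;		/* inline header segment */
	ds_cnt += nr_frags;		/* one segment per fragment */

	num_wqebbs = DIV_ROUND_UP(ds_cnt, MLX5_SEND_WQEBB_NUM_DS);
	/* e.g. inline mode + 3 fragments: 2 + 1 + 1 + 3 = 7 segments,
	 * i.e. 2 WQEBBs, so sq->pc advances by 2 instead of 1.
	 */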
Signed-off-by: Maxim Mikityanskiy
Signed-off-by: Saeed Mahameed
---
 drivers/net/ethernet/mellanox/mlx5/core/en.h  |  1 +
 .../ethernet/mellanox/mlx5/core/en/params.c   |  4 ++-
 .../ethernet/mellanox/mlx5/core/en/params.h   |  2 ++
 .../net/ethernet/mellanox/mlx5/core/en/xdp.c  | 32 +++++++++++++++++--
 .../mellanox/mlx5/core/en/xsk/setup.c         |  2 +-
 .../net/ethernet/mellanox/mlx5/core/en_main.c | 10 +++++-
 6 files changed, 46 insertions(+), 5 deletions(-)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en.h b/drivers/net/ethernet/mellanox/mlx5/core/en.h
index d084f930bb37..cf935a7bf387 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en.h
@@ -405,6 +405,7 @@ enum {
 	MLX5E_SQ_STATE_VLAN_NEED_L2_INLINE,
 	MLX5E_SQ_STATE_PENDING_XSK_TX,
 	MLX5E_SQ_STATE_PENDING_TLS_RX_RESYNC,
+	MLX5E_SQ_STATE_XDP_MULTIBUF,
 };
 
 struct mlx5e_tx_mpwqe {
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/params.c b/drivers/net/ethernet/mellanox/mlx5/core/en/params.c
index 9646867872c1..08fd1370a8b0 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en/params.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/params.c
@@ -843,6 +843,7 @@ static void mlx5e_build_async_icosq_param(struct mlx5_core_dev *mdev,
 
 void mlx5e_build_xdpsq_param(struct mlx5_core_dev *mdev,
 			     struct mlx5e_params *params,
+			     struct mlx5e_xsk_param *xsk,
 			     struct mlx5e_sq_param *param)
 {
 	void *sqc = param->sqc;
@@ -851,6 +852,7 @@ void mlx5e_build_xdpsq_param(struct mlx5_core_dev *mdev,
 	mlx5e_build_sq_param_common(mdev, param);
 	MLX5_SET(wq, wq, log_wq_sz, params->log_sq_size);
 	param->is_mpw = MLX5E_GET_PFLAG(params, MLX5E_PFLAG_XDP_TX_MPWQE);
+	param->is_xdp_mb = !mlx5e_rx_is_linear_skb(params, xsk);
 	mlx5e_build_tx_cq_param(mdev, params, &param->cqp);
 }
 
@@ -870,7 +872,7 @@ int mlx5e_build_channel_param(struct mlx5_core_dev *mdev,
 	async_icosq_log_wq_sz = mlx5e_build_async_icosq_log_wq_sz(mdev);
 
 	mlx5e_build_sq_param(mdev, params, &cparam->txq_sq);
-	mlx5e_build_xdpsq_param(mdev, params, &cparam->xdp_sq);
+	mlx5e_build_xdpsq_param(mdev, params, NULL, &cparam->xdp_sq);
 	mlx5e_build_icosq_param(mdev, icosq_log_wq_sz, &cparam->icosq);
 	mlx5e_build_async_icosq_param(mdev, async_icosq_log_wq_sz, &cparam->async_icosq);
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/params.h b/drivers/net/ethernet/mellanox/mlx5/core/en/params.h
index 47a368112e31..f5c46e78eebc 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en/params.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/params.h
@@ -31,6 +31,7 @@ struct mlx5e_sq_param {
 	struct mlx5_wq_param wq;
 	bool is_mpw;
 	bool is_tls;
+	bool is_xdp_mb;
 	u16 stop_room;
 };
 
@@ -155,6 +156,7 @@ void mlx5e_build_tx_cq_param(struct mlx5_core_dev *mdev,
 			     struct mlx5e_cq_param *param);
 void mlx5e_build_xdpsq_param(struct mlx5_core_dev *mdev,
 			     struct mlx5e_params *params,
+			     struct mlx5e_xsk_param *xsk,
 			     struct mlx5e_sq_param *param);
 int mlx5e_build_channel_param(struct mlx5_core_dev *mdev,
 			      struct mlx5e_params *params,
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c b/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c
index 9478df6c87bc..b220e613d102 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c
@@ -323,6 +323,7 @@ mlx5e_xmit_xdp_frame(struct mlx5e_xdpsq *sq, struct mlx5e_xmit_data *xdptxd,
 
 	dma_addr_t dma_addr = xdptxd->dma_addr;
 	u32 dma_len = xdptxd->len;
+	u16 ds_cnt, inline_hdr_sz;
 
 	struct mlx5e_xdpsq_stats *stats = sq->stats;
 
@@ -338,7 +339,8 @@ mlx5e_xmit_xdp_frame(struct mlx5e_xdpsq *sq, struct mlx5e_xmit_data *xdptxd,
 	if (unlikely(check_result < 0))
 		return false;
 
-	cseg->fm_ce_se = 0;
+	ds_cnt = MLX5E_TX_WQE_EMPTY_DS_COUNT + 1;
+	inline_hdr_sz = 0;
 
 	/* copy the inline part if required */
 	if (sq->min_inline_mode != MLX5_INLINE_MODE_NONE) {
@@ -347,7 +349,9 @@ mlx5e_xmit_xdp_frame(struct mlx5e_xdpsq *sq, struct mlx5e_xmit_data *xdptxd,
 		       MLX5E_XDP_MIN_INLINE - sizeof(eseg->inline_hdr.start));
 		dma_len  -= MLX5E_XDP_MIN_INLINE;
 		dma_addr += MLX5E_XDP_MIN_INLINE;
+		inline_hdr_sz = MLX5E_XDP_MIN_INLINE;
 		dseg++;
+		ds_cnt++;
 	}
 
 	/* write the dma part */
@@ -356,7 +360,31 @@ mlx5e_xmit_xdp_frame(struct mlx5e_xdpsq *sq, struct mlx5e_xmit_data *xdptxd,
 
 	cseg->opmod_idx_opcode = cpu_to_be32((sq->pc << 8) | MLX5_OPCODE_SEND);
 
-	sq->pc++;
+	if (unlikely(test_bit(MLX5E_SQ_STATE_XDP_MULTIBUF, &sq->state))) {
+		u8 num_pkts = 1;
+		u8 num_wqebbs;
+
+		memset(&cseg->signature, 0, sizeof(*cseg) -
+		       sizeof(cseg->opmod_idx_opcode) - sizeof(cseg->qpn_ds));
+		memset(eseg, 0, sizeof(*eseg) - sizeof(eseg->trailer));
+
+		eseg->inline_hdr.sz = cpu_to_be16(inline_hdr_sz);
+		dseg->lkey = sq->mkey_be;
+
+		cseg->qpn_ds = cpu_to_be32((sq->sqn << 8) | ds_cnt);
+
+		num_wqebbs = DIV_ROUND_UP(ds_cnt, MLX5_SEND_WQEBB_NUM_DS);
+		sq->db.wqe_info[pi] = (struct mlx5e_xdp_wqe_info) {
+			.num_wqebbs = num_wqebbs,
+			.num_pkts = num_pkts,
+		};
+
+		sq->pc += num_wqebbs;
+	} else {
+		cseg->fm_ce_se = 0;
+
+		sq->pc++;
+	}
 
 	sq->doorbell_cseg = cseg;
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/setup.c b/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/setup.c
index 25eac9e20342..3ad7f1301fa8 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/setup.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/setup.c
@@ -43,7 +43,7 @@ static void mlx5e_build_xsk_cparam(struct mlx5_core_dev *mdev,
 				   struct mlx5e_channel_param *cparam)
 {
 	mlx5e_build_rq_param(mdev, params, xsk, q_counter, &cparam->rq);
-	mlx5e_build_xdpsq_param(mdev, params, &cparam->xdp_sq);
+	mlx5e_build_xdpsq_param(mdev, params, xsk, &cparam->xdp_sq);
 }
 
 static int mlx5e_init_xsk_rq(struct mlx5e_channel *c,
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
index f21cae712ce5..95cec2848685 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
@@ -1666,13 +1666,21 @@ int mlx5e_open_xdpsq(struct mlx5e_channel *c, struct mlx5e_params *params,
 	csp.wq_ctrl         = &sq->wq_ctrl;
 	csp.min_inline_mode = sq->min_inline_mode;
 	set_bit(MLX5E_SQ_STATE_ENABLED, &sq->state);
+
+	/* Don't enable multi buffer on XDP_REDIRECT SQ, as it's not yet
+	 * supported by upstream, and there is no defined trigger to allow
+	 * transmitting redirected multi-buffer frames.
+	 */
+	if (param->is_xdp_mb && !is_redirect)
+		set_bit(MLX5E_SQ_STATE_XDP_MULTIBUF, &sq->state);
+
 	err = mlx5e_create_sq_rdy(c->mdev, param, &csp, 0, &sq->sqn);
 	if (err)
 		goto err_free_xdpsq;
 
 	mlx5e_set_xmit_fp(sq, param->is_mpw);
 
-	if (!param->is_mpw) {
+	if (!param->is_mpw && !test_bit(MLX5E_SQ_STATE_XDP_MULTIBUF, &sq->state)) {
 		unsigned int ds_cnt = MLX5E_XDP_TX_DS_COUNT;
 		unsigned int inline_hdr_sz = 0;
 		int i;
Miller" , Jakub Kicinski Cc: netdev@vger.kernel.org, Maxim Mikityanskiy , Saeed Mahameed Subject: [net-next 09/15] net/mlx5e: Implement sending multi buffer XDP frames Date: Fri, 18 Mar 2022 13:52:42 -0700 Message-Id: <20220318205248.33367-10-saeed@kernel.org> X-Mailer: git-send-email 2.35.1 In-Reply-To: <20220318205248.33367-1-saeed@kernel.org> References: <20220318205248.33367-1-saeed@kernel.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org X-Patchwork-Delegate: kuba@kernel.org From: Maxim Mikityanskiy xmit_xdp_frame is extended to support sending fragmented XDP frames. The next commit will start using this functionality. Signed-off-by: Maxim Mikityanskiy Signed-off-by: Saeed Mahameed --- drivers/net/ethernet/mellanox/mlx5/core/en.h | 1 + .../net/ethernet/mellanox/mlx5/core/en/xdp.c | 96 +++++++++++++++---- .../net/ethernet/mellanox/mlx5/core/en/xdp.h | 2 + .../ethernet/mellanox/mlx5/core/en/xsk/tx.c | 3 +- 4 files changed, 80 insertions(+), 22 deletions(-) diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en.h b/drivers/net/ethernet/mellanox/mlx5/core/en.h index cf935a7bf387..8653ac0fd865 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en.h +++ b/drivers/net/ethernet/mellanox/mlx5/core/en.h @@ -538,6 +538,7 @@ struct mlx5e_xdpsq; typedef int (*mlx5e_fp_xmit_xdp_frame_check)(struct mlx5e_xdpsq *); typedef bool (*mlx5e_fp_xmit_xdp_frame)(struct mlx5e_xdpsq *, struct mlx5e_xmit_data *, + struct skb_shared_info *, int); struct mlx5e_xdpsq { diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c b/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c index b220e613d102..52e0f0028c35 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c @@ -120,7 +120,7 @@ mlx5e_xmit_xdp_buff(struct mlx5e_xdpsq *sq, struct mlx5e_rq *rq, } if (unlikely(!INDIRECT_CALL_2(sq->xmit_xdp_frame, mlx5e_xmit_xdp_frame_mpwqe, - mlx5e_xmit_xdp_frame, sq, &xdptxd, 0))) + mlx5e_xmit_xdp_frame, sq, &xdptxd, NULL, 0))) return false; mlx5e_xdpi_fifo_push(&sq->db.xdpi_fifo, &xdpi); @@ -263,13 +263,27 @@ INDIRECT_CALLABLE_SCOPE int mlx5e_xmit_xdp_frame_check_mpwqe(struct mlx5e_xdpsq return MLX5E_XDP_CHECK_OK; } +INDIRECT_CALLABLE_SCOPE bool +mlx5e_xmit_xdp_frame(struct mlx5e_xdpsq *sq, struct mlx5e_xmit_data *xdptxd, + struct skb_shared_info *sinfo, int check_result); + INDIRECT_CALLABLE_SCOPE bool mlx5e_xmit_xdp_frame_mpwqe(struct mlx5e_xdpsq *sq, struct mlx5e_xmit_data *xdptxd, - int check_result) + struct skb_shared_info *sinfo, int check_result) { struct mlx5e_tx_mpwqe *session = &sq->mpwqe; struct mlx5e_xdpsq_stats *stats = sq->stats; + if (unlikely(sinfo)) { + /* MPWQE is enabled, but a multi-buffer packet is queued for + * transmission. MPWQE can't send fragmented packets, so close + * the current session and fall back to a regular WQE. 
+ */ + if (unlikely(sq->mpwqe.wqe)) + mlx5e_xdp_mpwqe_complete(sq); + return mlx5e_xmit_xdp_frame(sq, xdptxd, sinfo, 0); + } + if (unlikely(xdptxd->len > sq->hw_mtu)) { stats->err++; return false; @@ -297,9 +311,9 @@ mlx5e_xmit_xdp_frame_mpwqe(struct mlx5e_xdpsq *sq, struct mlx5e_xmit_data *xdptx return true; } -INDIRECT_CALLABLE_SCOPE int mlx5e_xmit_xdp_frame_check(struct mlx5e_xdpsq *sq) +static int mlx5e_xmit_xdp_frame_check_stop_room(struct mlx5e_xdpsq *sq, int stop_room) { - if (unlikely(!mlx5e_wqc_has_room_for(&sq->wq, sq->cc, sq->pc, 1))) { + if (unlikely(!mlx5e_wqc_has_room_for(&sq->wq, sq->cc, sq->pc, stop_room))) { /* SQ is full, ring doorbell */ mlx5e_xmit_xdp_doorbell(sq); sq->stats->full++; @@ -309,37 +323,66 @@ INDIRECT_CALLABLE_SCOPE int mlx5e_xmit_xdp_frame_check(struct mlx5e_xdpsq *sq) return MLX5E_XDP_CHECK_OK; } +INDIRECT_CALLABLE_SCOPE int mlx5e_xmit_xdp_frame_check(struct mlx5e_xdpsq *sq) +{ + return mlx5e_xmit_xdp_frame_check_stop_room(sq, 1); +} + INDIRECT_CALLABLE_SCOPE bool mlx5e_xmit_xdp_frame(struct mlx5e_xdpsq *sq, struct mlx5e_xmit_data *xdptxd, - int check_result) + struct skb_shared_info *sinfo, int check_result) { struct mlx5_wq_cyc *wq = &sq->wq; - u16 pi = mlx5_wq_cyc_ctr2ix(wq, sq->pc); - struct mlx5e_tx_wqe *wqe = mlx5_wq_cyc_get_wqe(wq, pi); - - struct mlx5_wqe_ctrl_seg *cseg = &wqe->ctrl; - struct mlx5_wqe_eth_seg *eseg = &wqe->eth; - struct mlx5_wqe_data_seg *dseg = wqe->data; + struct mlx5_wqe_ctrl_seg *cseg; + struct mlx5_wqe_data_seg *dseg; + struct mlx5_wqe_eth_seg *eseg; + struct mlx5e_tx_wqe *wqe; dma_addr_t dma_addr = xdptxd->dma_addr; u32 dma_len = xdptxd->len; u16 ds_cnt, inline_hdr_sz; + u8 num_wqebbs = 1; + int num_frags = 0; + u16 pi; struct mlx5e_xdpsq_stats *stats = sq->stats; - net_prefetchw(wqe); - if (unlikely(dma_len < MLX5E_XDP_MIN_INLINE || sq->hw_mtu < dma_len)) { stats->err++; return false; } - if (!check_result) - check_result = mlx5e_xmit_xdp_frame_check(sq); + ds_cnt = MLX5E_TX_WQE_EMPTY_DS_COUNT + 1; + if (sq->min_inline_mode != MLX5_INLINE_MODE_NONE) + ds_cnt++; + + /* check_result must be 0 if sinfo is passed. */ + if (!check_result) { + int stop_room = 1; + + if (unlikely(sinfo)) { + ds_cnt += sinfo->nr_frags; + num_frags = sinfo->nr_frags; + num_wqebbs = DIV_ROUND_UP(ds_cnt, MLX5_SEND_WQEBB_NUM_DS); + /* Assuming MLX5_CAP_GEN(mdev, max_wqe_sz_sq) is big + * enough to hold all fragments. 
+ */ + stop_room = MLX5E_STOP_ROOM(num_wqebbs); + } + + check_result = mlx5e_xmit_xdp_frame_check_stop_room(sq, stop_room); + } if (unlikely(check_result < 0)) return false; - ds_cnt = MLX5E_TX_WQE_EMPTY_DS_COUNT + 1; + pi = mlx5e_xdpsq_get_next_pi(sq, num_wqebbs); + wqe = mlx5_wq_cyc_get_wqe(wq, pi); + net_prefetchw(wqe); + + cseg = &wqe->ctrl; + eseg = &wqe->eth; + dseg = wqe->data; + inline_hdr_sz = 0; /* copy the inline part if required */ @@ -351,7 +394,6 @@ mlx5e_xmit_xdp_frame(struct mlx5e_xdpsq *sq, struct mlx5e_xmit_data *xdptxd, dma_addr += MLX5E_XDP_MIN_INLINE; inline_hdr_sz = MLX5E_XDP_MIN_INLINE; dseg++; - ds_cnt++; } /* write the dma part */ @@ -361,8 +403,8 @@ mlx5e_xmit_xdp_frame(struct mlx5e_xdpsq *sq, struct mlx5e_xmit_data *xdptxd, cseg->opmod_idx_opcode = cpu_to_be32((sq->pc << 8) | MLX5_OPCODE_SEND); if (unlikely(test_bit(MLX5E_SQ_STATE_XDP_MULTIBUF, &sq->state))) { - u8 num_pkts = 1; - u8 num_wqebbs; + u8 num_pkts = 1 + num_frags; + int i; memset(&cseg->signature, 0, sizeof(*cseg) - sizeof(cseg->opmod_idx_opcode) - sizeof(cseg->qpn_ds)); @@ -371,9 +413,21 @@ mlx5e_xmit_xdp_frame(struct mlx5e_xdpsq *sq, struct mlx5e_xmit_data *xdptxd, eseg->inline_hdr.sz = cpu_to_be16(inline_hdr_sz); dseg->lkey = sq->mkey_be; + for (i = 0; i < num_frags; i++) { + skb_frag_t *frag = &sinfo->frags[i]; + dma_addr_t addr; + + addr = page_pool_get_dma_addr(skb_frag_page(frag)) + + skb_frag_off(frag); + + dseg++; + dseg->addr = cpu_to_be64(addr); + dseg->byte_count = cpu_to_be32(skb_frag_size(frag)); + dseg->lkey = sq->mkey_be; + } + cseg->qpn_ds = cpu_to_be32((sq->sqn << 8) | ds_cnt); - num_wqebbs = DIV_ROUND_UP(ds_cnt, MLX5_SEND_WQEBB_NUM_DS); sq->db.wqe_info[pi] = (struct mlx5e_xdp_wqe_info) { .num_wqebbs = num_wqebbs, .num_pkts = num_pkts, @@ -566,7 +620,7 @@ int mlx5e_xdp_xmit(struct net_device *dev, int n, struct xdp_frame **frames, xdpi.frame.dma_addr = xdptxd.dma_addr; ret = INDIRECT_CALL_2(sq->xmit_xdp_frame, mlx5e_xmit_xdp_frame_mpwqe, - mlx5e_xmit_xdp_frame, sq, &xdptxd, 0); + mlx5e_xmit_xdp_frame, sq, &xdptxd, NULL, 0); if (unlikely(!ret)) { dma_unmap_single(sq->pdev, xdptxd.dma_addr, xdptxd.len, DMA_TO_DEVICE); diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.h b/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.h index ce31828b6c19..4868fa1b82d1 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.h +++ b/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.h @@ -59,9 +59,11 @@ int mlx5e_xdp_xmit(struct net_device *dev, int n, struct xdp_frame **frames, INDIRECT_CALLABLE_DECLARE(bool mlx5e_xmit_xdp_frame_mpwqe(struct mlx5e_xdpsq *sq, struct mlx5e_xmit_data *xdptxd, + struct skb_shared_info *sinfo, int check_result)); INDIRECT_CALLABLE_DECLARE(bool mlx5e_xmit_xdp_frame(struct mlx5e_xdpsq *sq, struct mlx5e_xmit_data *xdptxd, + struct skb_shared_info *sinfo, int check_result)); INDIRECT_CALLABLE_DECLARE(int mlx5e_xmit_xdp_frame_check_mpwqe(struct mlx5e_xdpsq *sq)); INDIRECT_CALLABLE_DECLARE(int mlx5e_xmit_xdp_frame_check(struct mlx5e_xdpsq *sq)); diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/tx.c b/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/tx.c index 5a889835039c..3ec0c17db010 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/tx.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/tx.c @@ -103,7 +103,8 @@ bool mlx5e_xsk_tx(struct mlx5e_xdpsq *sq, unsigned int budget) xsk_buff_raw_dma_sync_for_device(pool, xdptxd.dma_addr, xdptxd.len); ret = INDIRECT_CALL_2(sq->xmit_xdp_frame, mlx5e_xmit_xdp_frame_mpwqe, - mlx5e_xmit_xdp_frame, sq, &xdptxd, 
check_result); + mlx5e_xmit_xdp_frame, sq, &xdptxd, NULL, + check_result); if (unlikely(!ret)) { if (sq->mpwqe.wqe) mlx5e_xdp_mpwqe_complete(sq);

From patchwork Fri Mar 18 20:52:43 2022
From: Saeed Mahameed
To: "David S. Miller", Jakub Kicinski
Cc: netdev@vger.kernel.org, Maxim Mikityanskiy, Saeed Mahameed
Subject: [net-next 10/15] net/mlx5e: Unindent the else-block in mlx5e_xmit_xdp_buff
Date: Fri, 18 Mar 2022 13:52:43 -0700
Message-Id: <20220318205248.33367-11-saeed@kernel.org>

From: Maxim Mikityanskiy

The next commit will add more indentation levels to mlx5e_xmit_xdp_buff. To keep indentation minimal, unindent the else-block of the if-statement by doing an early return.
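As a generic illustration of the pattern (hypothetical function names, not driver code):

	/* Before: the long path is nested in an else-block. */
	if (frame_mode) {
		setup_frame();
	} else {
		setup_page();
		sync_page();
	}
	return send();

	/* After: the short branch returns early, so the long path loses one
	 * indentation level and has room to grow in later commits.
	 */
	if (frame_mode) {
		setup_frame();
		return send();
	}
	setup_page();
	sync_page();
	return send();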
Signed-off-by: Maxim Mikityanskiy
Signed-off-by: Saeed Mahameed
---
 .../net/ethernet/mellanox/mlx5/core/en/xdp.c | 34 +++++++++++-------- 1 file changed, 20 insertions(+), 14 deletions(-)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c b/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c index 52e0f0028c35..368e54949614 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c @@ -101,24 +101,30 @@ mlx5e_xmit_xdp_buff(struct mlx5e_xdpsq *sq, struct mlx5e_rq *rq, xdptxd.dma_addr = dma_addr; xdpi.frame.xdpf = xdpf; xdpi.frame.dma_addr = dma_addr; - } else { - /* Driver assumes that xdp_convert_buff_to_frame returns - * an xdp_frame that points to the same memory region as - * the original xdp_buff. It allows to map the memory only - * once and to use the DMA_BIDIRECTIONAL mode. - */ - - xdpi.mode = MLX5E_XDP_XMIT_MODE_PAGE; - dma_addr = page_pool_get_dma_addr(page) + (xdpf->data - (void *)xdpf); - dma_sync_single_for_device(sq->pdev, dma_addr, xdptxd.len, - DMA_TO_DEVICE); + if (unlikely(!INDIRECT_CALL_2(sq->xmit_xdp_frame, mlx5e_xmit_xdp_frame_mpwqe, + mlx5e_xmit_xdp_frame, sq, &xdptxd, NULL, 0))) + return false; - xdptxd.dma_addr = dma_addr; - xdpi.page.rq = rq; - xdpi.page.page = page; + mlx5e_xdpi_fifo_push(&sq->db.xdpi_fifo, &xdpi); + return true; } + /* Driver assumes that xdp_convert_buff_to_frame returns an xdp_frame + * that points to the same memory region as the original xdp_buff. It + * allows to map the memory only once and to use the DMA_BIDIRECTIONAL + * mode. + */ + + xdpi.mode = MLX5E_XDP_XMIT_MODE_PAGE; + + dma_addr = page_pool_get_dma_addr(page) + (xdpf->data - (void *)xdpf); + dma_sync_single_for_device(sq->pdev, dma_addr, xdptxd.len, DMA_TO_DEVICE); + + xdptxd.dma_addr = dma_addr; + xdpi.page.rq = rq; + xdpi.page.page = page; + if (unlikely(!INDIRECT_CALL_2(sq->xmit_xdp_frame, mlx5e_xmit_xdp_frame_mpwqe, mlx5e_xmit_xdp_frame, sq, &xdptxd, NULL, 0))) return false;

From patchwork Fri Mar 18 20:52:44 2022
From: Saeed Mahameed
To: "David S. Miller", Jakub Kicinski
Cc: netdev@vger.kernel.org, Maxim Mikityanskiy, Saeed Mahameed
Subject: [net-next 11/15] net/mlx5e: Support multi buffer XDP_TX
Date: Fri, 18 Mar 2022 13:52:44 -0700
Message-Id: <20220318205248.33367-12-saeed@kernel.org>

From: Maxim Mikityanskiy

This commit enables passing multi buffer XDP frames to the TX handlers on XDP_TX. Fragments are DMA synchronized to the device and queued to the xdpi_fifo for a subsequent unmapping.

Signed-off-by: Maxim Mikityanskiy
Signed-off-by: Saeed Mahameed
---
 .../net/ethernet/mellanox/mlx5/core/en/xdp.c | 39 +++++++++++++++---- 1 file changed, 31 insertions(+), 8 deletions(-)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c b/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c index 368e54949614..f35b62ce4c07 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c @@ -59,20 +59,17 @@ static inline bool mlx5e_xmit_xdp_buff(struct mlx5e_xdpsq *sq, struct mlx5e_rq *rq, struct page *page, struct xdp_buff *xdp) { + struct skb_shared_info *sinfo = NULL; struct mlx5e_xmit_data xdptxd; struct mlx5e_xdp_info xdpi; struct xdp_frame *xdpf; dma_addr_t dma_addr; + int i; xdpf = xdp_convert_buff_to_frame(xdp); if (unlikely(!xdpf)) return false; - if (unlikely(xdp_frame_has_frags(xdpf))) { - xdp_return_frame(xdpf); - return false; - } - xdptxd.data = xdpf->data; xdptxd.len = xdpf->len; @@ -117,19 +114,45 @@ mlx5e_xmit_xdp_buff(struct mlx5e_xdpsq *sq, struct mlx5e_rq *rq, */ xdpi.mode = MLX5E_XDP_XMIT_MODE_PAGE; + xdpi.page.rq = rq; dma_addr = page_pool_get_dma_addr(page) + (xdpf->data - (void *)xdpf); dma_sync_single_for_device(sq->pdev, dma_addr, xdptxd.len, DMA_TO_DEVICE); + if (unlikely(xdp_frame_has_frags(xdpf))) { + sinfo = xdp_get_shared_info_from_frame(xdpf); + + for (i = 0; i < sinfo->nr_frags; i++) { + skb_frag_t *frag = &sinfo->frags[i]; + dma_addr_t addr; + u32 len; + + addr = page_pool_get_dma_addr(skb_frag_page(frag)) + + skb_frag_off(frag); + len = skb_frag_size(frag); + dma_sync_single_for_device(sq->pdev, addr, len, + DMA_TO_DEVICE); + } + } + xdptxd.dma_addr = dma_addr; - xdpi.page.rq = rq; - xdpi.page.page = page; if (unlikely(!INDIRECT_CALL_2(sq->xmit_xdp_frame, mlx5e_xmit_xdp_frame_mpwqe, - mlx5e_xmit_xdp_frame, sq, &xdptxd, NULL, 0))) + mlx5e_xmit_xdp_frame, sq, &xdptxd, sinfo, 0))) return false; + xdpi.page.page = page; mlx5e_xdpi_fifo_push(&sq->db.xdpi_fifo, &xdpi); + + if (unlikely(xdp_frame_has_frags(xdpf))) { + for (i = 0; i < sinfo->nr_frags; i++) { + skb_frag_t *frag = &sinfo->frags[i]; + + xdpi.page.page = skb_frag_page(frag); + mlx5e_xdpi_fifo_push(&sq->db.xdpi_fifo, &xdpi); + } + } + return true; }

From patchwork Fri Mar 18 20:52:45 2022
From: Saeed Mahameed
To: "David S. Miller", Jakub Kicinski
Cc: netdev@vger.kernel.org, Maxim Mikityanskiy, Saeed Mahameed
Subject: [net-next 12/15] net/mlx5e: Permit XDP with non-linear legacy RQ
Date: Fri, 18 Mar 2022 13:52:45 -0700
Message-Id: <20220318205248.33367-13-saeed@kernel.org>

From: Maxim Mikityanskiy

Now that legacy RQ implements XDP in the non-linear mode, stop blocking this configuration. Allow non-linear mode only for programs aware of multi buffer. XDP performance with linear mode RQ hasn't changed.
Baseline (MTU 1500, TX MPWQE, legacy RQ, single core):
  60-byte packets, XDP_DROP: 11.25 Mpps
  60-byte packets, XDP_TX: 9.0 Mpps
  60-byte packets, XDP_PASS: 668 kpps

Multi buffer (MTU 9000, TX MPWQE, legacy RQ, single core):
  60-byte packets, XDP_DROP: 10.1 Mpps
  60-byte packets, XDP_TX: 6.6 Mpps
  60-byte packets, XDP_PASS: 658 kpps
  8900-byte packets, XDP_DROP: 769 kpps (100% of sent packets)
  8900-byte packets, XDP_TX: 674 kpps (100% of sent packets)
  8900-byte packets, XDP_PASS: 637 kpps

Signed-off-by: Maxim Mikityanskiy
Signed-off-by: Saeed Mahameed
---
 .../net/ethernet/mellanox/mlx5/core/en_main.c | 39 +++++++++++++------ 1 file changed, 27 insertions(+), 12 deletions(-)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c index 95cec2848685..3256d2c375c3 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c @@ -3953,6 +3953,31 @@ static bool mlx5e_xsk_validate_mtu(struct net_device *netdev, return true; } +static bool mlx5e_params_validate_xdp(struct net_device *netdev, struct mlx5e_params *params) +{ + bool is_linear; + + /* No XSK params: AF_XDP can't be enabled yet at the point of setting + * the XDP program. + */ + is_linear = mlx5e_rx_is_linear_skb(params, NULL); + + if (!is_linear && params->rq_wq_type != MLX5_WQ_TYPE_CYCLIC) { + netdev_warn(netdev, "XDP is not allowed with striding RQ and MTU(%d) > %d\n", + params->sw_mtu, + mlx5e_xdp_max_mtu(params, NULL)); + return false; + } + if (!is_linear && !params->xdp_prog->aux->xdp_has_frags) { + netdev_warn(netdev, "MTU(%d) > %d, too big for an XDP program not aware of multi buffer\n", + params->sw_mtu, + mlx5e_xdp_max_mtu(params, NULL)); + return false; + } + + return true; +} + int mlx5e_change_mtu(struct net_device *netdev, int new_mtu, mlx5e_fp_preactivate preactivate) { @@ -3972,10 +3997,7 @@ int mlx5e_change_mtu(struct net_device *netdev, int new_mtu, if (err) goto out; - if (params->xdp_prog && - !mlx5e_rx_is_linear_skb(&new_params, NULL)) { - netdev_err(netdev, "MTU(%d) > %d is not allowed while XDP enabled\n", - new_mtu, mlx5e_xdp_max_mtu(params, NULL)); + if (new_params.xdp_prog && !mlx5e_params_validate_xdp(netdev, &new_params)) { err = -EINVAL; goto out; } @@ -4458,15 +4480,8 @@ static int mlx5e_xdp_allowed(struct mlx5e_priv *priv, struct bpf_prog *prog) new_params = priv->channels.params; new_params.xdp_prog = prog; - /* No XSK params: AF_XDP can't be enabled yet at the point of setting - * the XDP program.
- */ - if (!mlx5e_rx_is_linear_skb(&new_params, NULL)) { - netdev_warn(netdev, "XDP is not allowed with MTU(%d) > %d\n", - new_params.sw_mtu, - mlx5e_xdp_max_mtu(&new_params, NULL)); + if (!mlx5e_params_validate_xdp(netdev, &new_params)) return -EINVAL; - } return 0; }

From patchwork Fri Mar 18 20:52:46 2022
From: Saeed Mahameed
To: "David S. Miller", Jakub Kicinski
Cc: netdev@vger.kernel.org, Maxim Mikityanskiy, Saeed Mahameed
Subject: [net-next 13/15] net/mlx5e: Remove MLX5E_XDP_TX_DS_COUNT
Date: Fri, 18 Mar 2022 13:52:46 -0700
Message-Id: <20220318205248.33367-14-saeed@kernel.org>

From: Maxim Mikityanskiy

After introducing multi-buffer XDP_TX, the MLX5E_XDP_TX_DS_COUNT define became misleading: it's no longer the DS count of an XDP_TX WQE, since such a WQE can now be longer because of fragments. As the define is used in only one place, mlx5e_open_xdpsq(), it's not very useful anymore either. Remove the define and calculate ds_count for prefilled single-fragment WQEs inline.
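To make the DS arithmetic concrete, here is a small sketch. The identifiers come from the driver; the value of MLX5E_TX_WQE_EMPTY_DS_COUNT is an assumption for the example:

	/* One DS (data segment) is 16 bytes; one WQEBB (WQE basic block) is
	 * 64 bytes, so MLX5_SEND_WQEBB_NUM_DS == 4.
	 */
	u16 ds_cnt = MLX5E_TX_WQE_EMPTY_DS_COUNT + 1;	/* ctrl + eth + 1 data DS */
	u8 num_wqebbs;

	ds_cnt += sinfo->nr_frags;			/* one extra DS per fragment */
	num_wqebbs = DIV_ROUND_UP(ds_cnt, MLX5_SEND_WQEBB_NUM_DS);

	/* Assuming MLX5E_TX_WQE_EMPTY_DS_COUNT == 2: a single-fragment frame
	 * gives ds_cnt = 3, i.e. 1 WQEBB (the prefilled case above), while a
	 * frame with two extra fragments gives ds_cnt = 5, i.e. 2 WQEBBs and
	 * MLX5E_STOP_ROOM(2) of stop room before posting.
	 */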
Signed-off-by: Maxim Mikityanskiy
Signed-off-by: Saeed Mahameed
---
 drivers/net/ethernet/mellanox/mlx5/core/en/xdp.h | 1 - drivers/net/ethernet/mellanox/mlx5/core/en_main.c | 2 +- 2 files changed, 1 insertion(+), 2 deletions(-)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.h b/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.h index 4868fa1b82d1..287e17911251 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.h +++ b/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.h @@ -38,7 +38,6 @@ #include "en/txrx.h" #define MLX5E_XDP_MIN_INLINE (ETH_HLEN + VLAN_HLEN) -#define MLX5E_XDP_TX_DS_COUNT (MLX5E_TX_WQE_EMPTY_DS_COUNT + 1 /* SG DS */) #define MLX5E_XDP_INLINE_WQE_MAX_DS_CNT 16 #define MLX5E_XDP_INLINE_WQE_SZ_THRSD \
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c index 3256d2c375c3..2f1dedc721d1 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c @@ -1681,7 +1681,7 @@ int mlx5e_open_xdpsq(struct mlx5e_channel *c, struct mlx5e_params *params, mlx5e_set_xmit_fp(sq, param->is_mpw); if (!param->is_mpw && !test_bit(MLX5E_SQ_STATE_XDP_MULTIBUF, &sq->state)) { - unsigned int ds_cnt = MLX5E_XDP_TX_DS_COUNT; + unsigned int ds_cnt = MLX5E_TX_WQE_EMPTY_DS_COUNT + 1; unsigned int inline_hdr_sz = 0; int i;
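One detail from patches 09 and 11 worth spelling out: num_pkts is set to 1 + num_frags because XDP_TX pushes one xdpi entry per page backing the frame, and the completion path pops the same number, so every page is released exactly once. A simplified sketch of that pairing (the push side mirrors the diffs above; the completion loop is paraphrased, with the actual release call elided):

	/* TX side (patch 11): one FIFO entry per page of the frame. */
	xdpi.page.page = page;				/* the linear part */
	mlx5e_xdpi_fifo_push(&sq->db.xdpi_fifo, &xdpi);
	for (i = 0; i < sinfo->nr_frags; i++) {
		xdpi.page.page = skb_frag_page(&sinfo->frags[i]);
		mlx5e_xdpi_fifo_push(&sq->db.xdpi_fifo, &xdpi);
	}

	/* Completion side: wqe_info.num_pkts (1 + num_frags, patch 09) says
	 * how many FIFO entries belong to the completed WQE.
	 */
	for (i = 0; i < wi->num_pkts; i++) {
		xdpi = mlx5e_xdpi_fifo_pop(&sq->db.xdpi_fifo);
		/* unmap or recycle xdpi.page.page back to the page pool */
	}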
Miller" , Jakub Kicinski Cc: netdev@vger.kernel.org, Tariq Toukan , Gal Pressman , Saeed Mahameed Subject: [net-next 14/15] net/mlx5e: Statify function mlx5_cmd_trigger_completions Date: Fri, 18 Mar 2022 13:52:47 -0700 Message-Id: <20220318205248.33367-15-saeed@kernel.org> X-Mailer: git-send-email 2.35.1 In-Reply-To: <20220318205248.33367-1-saeed@kernel.org> References: <20220318205248.33367-1-saeed@kernel.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org X-Patchwork-Delegate: kuba@kernel.org From: Tariq Toukan Starting from commit 4cab346bcf74 ("net/mlx5: No command allowed when command interface is not ready"), no calls to mlx5_cmd_trigger_completions() are external to cmd.c anymore. Make it a static function. Signed-off-by: Tariq Toukan Reviewed-by: Gal Pressman Signed-off-by: Saeed Mahameed --- drivers/net/ethernet/mellanox/mlx5/core/cmd.c | 2 +- drivers/net/ethernet/mellanox/mlx5/core/mlx5_core.h | 1 - 2 files changed, 1 insertion(+), 2 deletions(-) diff --git a/drivers/net/ethernet/mellanox/mlx5/core/cmd.c b/drivers/net/ethernet/mellanox/mlx5/core/cmd.c index c2462d37f1b3..f1329f83f988 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/cmd.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/cmd.c @@ -1720,7 +1720,7 @@ static void mlx5_cmd_comp_handler(struct mlx5_core_dev *dev, u64 vec, bool force } } -void mlx5_cmd_trigger_completions(struct mlx5_core_dev *dev) +static void mlx5_cmd_trigger_completions(struct mlx5_core_dev *dev) { struct mlx5_cmd *cmd = &dev->cmd; unsigned long bitmask; diff --git a/drivers/net/ethernet/mellanox/mlx5/core/mlx5_core.h b/drivers/net/ethernet/mellanox/mlx5/core/mlx5_core.h index 6f8baa0f2a73..3b231e90111d 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/mlx5_core.h +++ b/drivers/net/ethernet/mellanox/mlx5/core/mlx5_core.h @@ -176,7 +176,6 @@ int mlx5_destroy_scheduling_element_cmd(struct mlx5_core_dev *dev, u8 hierarchy, u32 element_id); int mlx5_wait_for_pages(struct mlx5_core_dev *dev, int *pages); -void mlx5_cmd_trigger_completions(struct mlx5_core_dev *dev); void mlx5_cmd_flush(struct mlx5_core_dev *dev); void mlx5_cq_debugfs_init(struct mlx5_core_dev *dev); void mlx5_cq_debugfs_cleanup(struct mlx5_core_dev *dev); From patchwork Fri Mar 18 20:52:48 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Saeed Mahameed X-Patchwork-Id: 12785939 X-Patchwork-Delegate: kuba@kernel.org Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 4AD65C433EF for ; Fri, 18 Mar 2022 20:53:42 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S240915AbiCRUy5 (ORCPT ); Fri, 18 Mar 2022 16:54:57 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:45034 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S240878AbiCRUym (ORCPT ); Fri, 18 Mar 2022 16:54:42 -0400 Received: from ams.source.kernel.org (ams.source.kernel.org [IPv6:2604:1380:4601:e00::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id B83DE13E2E for ; Fri, 18 Mar 2022 13:53:19 -0700 (PDT) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by ams.source.kernel.org (Postfix) with ESMTPS id 8D6D0B8257E for ; Fri, 18 Mar 2022 
20:53:18 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 005FCC340EF; Fri, 18 Mar 2022 20:53:16 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1647636797; bh=QnoMYRrvfTzxFiV4EcFLNJO5D+TqfjgLW6giPyzJsnA=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=pItPBLtku3N9fAYbGz35NN8t/MSb81h6Nl2G/WUxkkHhlXAzRQii9cPSo2PRo1ReG kDPG5+tn9OUg0uP47W/ned52NphaWitsX0EmskhwsnKPIwKIZr2fY9LAwxnM7BNdjM bBjzqXR4t6xARgSjIhcYy0kX2ex4ADJffyDdwnp5H/LDVeQUb4PVjvqY13f+Yrq07x FTxRHPnHdN82NGWOhWefW8ZOKmKKfpnXUhybuT8GsQ2kPFTzCclgU/nGrs/8C6GTTx pUcTex8Vyer7OSrI0b6jWSxyXGiZCrDpFgnzURLMU6+BPX8XLqm1UiOU2OcteVD0Ux FOeo0YZuQq9QQ== From: Saeed Mahameed To: "David S. Miller" , Jakub Kicinski Cc: netdev@vger.kernel.org, Saeed Mahameed , Moshe Tal , Maxim Mikityanskiy , Tariq Toukan Subject: [net-next 15/15] net/mlx5e: HTB, remove unused function declaration Date: Fri, 18 Mar 2022 13:52:48 -0700 Message-Id: <20220318205248.33367-16-saeed@kernel.org> X-Mailer: git-send-email 2.35.1 In-Reply-To: <20220318205248.33367-1-saeed@kernel.org> References: <20220318205248.33367-1-saeed@kernel.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org X-Patchwork-Delegate: kuba@kernel.org From: Saeed Mahameed There is no function mlx5e_get_sq(), remove the declaration. Signed-off-by: Saeed Mahameed Signed-off-by: Moshe Tal Reviewed-by: Maxim Mikityanskiy Reviewed-by: Tariq Toukan --- drivers/net/ethernet/mellanox/mlx5/core/en/qos.h | 1 - 1 file changed, 1 deletion(-) diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/qos.h b/drivers/net/ethernet/mellanox/mlx5/core/en/qos.h index b7558907ba20..5d9bd91d86c2 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en/qos.h +++ b/drivers/net/ethernet/mellanox/mlx5/core/en/qos.h @@ -18,7 +18,6 @@ int mlx5e_qos_cur_leaf_nodes(struct mlx5e_priv *priv); /* TX datapath API */ int mlx5e_get_txq_by_classid(struct mlx5e_priv *priv, u16 classid); -struct mlx5e_txqsq *mlx5e_get_sq(struct mlx5e_priv *priv, int qid); /* SQ lifecycle */ int mlx5e_qos_open_queues(struct mlx5e_priv *priv, struct mlx5e_channels *chs);
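For completeness, the xdp_has_frags check added in patch 12 corresponds to the BPF_F_XDP_HAS_FRAGS load flag on the user side. A minimal sketch of a multi-buffer-aware program, assuming a recent libbpf that maps the "xdp.frags" section name to that flag and exposes the bpf_xdp_get_buff_len() helper:

	#include <linux/bpf.h>
	#include <bpf/bpf_helpers.h>

	/* Loading from the "xdp.frags" section makes libbpf set
	 * BPF_F_XDP_HAS_FRAGS, which the driver sees as
	 * prog->aux->xdp_has_frags and therefore permits the
	 * non-linear legacy RQ configuration above.
	 */
	SEC("xdp.frags")
	int xdp_mb_prog(struct xdp_md *ctx)
	{
		/* Total frame length: linear part plus all fragments. */
		__u32 len = bpf_xdp_get_buff_len(ctx);

		return len ? XDP_PASS : XDP_DROP;
	}

	char _license[] SEC("license") = "GPL";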