From patchwork Sun Mar 20 19:57:54 2022
From: Michael Chan
To: davem@davemloft.net
Cc: netdev@vger.kernel.org, kuba@kernel.org, gospo@broadcom.com
Subject: [PATCH net-next v2 01/11] bnxt: refactor bnxt_rx_xdp to separate xdp_init_buff/xdp_prepare_buff
Date: Sun, 20 Mar 2022 15:57:54 -0400
Message-Id: <1647806284-8529-2-git-send-email-michael.chan@broadcom.com>

From: Andy Gospodarek

Move initialization of xdp_buff outside of bnxt_rx_xdp to prepare for allowing bnxt_rx_xdp to operate on multibuffer xdp_buffs.
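As an illustration, the split boils down to the following pattern using the generic XDP helpers. This is a simplified sketch, not the exact bnxt code: the example_* names and parameters are placeholders, while xdp_init_buff(), xdp_prepare_buff() and bpf_prog_run_xdp() are the in-tree helpers the driver calls.

        /* Simplified sketch: describe the packet in the xdp_buff once,
         * then run the program as a separate step.  PAGE_SIZE is the frame
         * size because the driver gives each packet a full page in page mode.
         */
        static void example_xdp_buff_init(struct xdp_rxq_info *rxq,
                                          u8 *data_ptr, u32 headroom, u32 len,
                                          struct xdp_buff *xdp)
        {
                xdp_init_buff(xdp, PAGE_SIZE, rxq);
                /* data_ptr points at the payload, so the hard start of the
                 * buffer is headroom bytes before it.
                 */
                xdp_prepare_buff(xdp, data_ptr - headroom, headroom, len, false);
        }

        static bool example_run_xdp(struct bpf_prog *prog, struct xdp_buff *xdp)
        {
                u32 act = bpf_prog_run_xdp(prog, xdp);

                /* true: consumed by XDP; false: hand the packet to the stack */
                return act != XDP_PASS;
        }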
v2: Fix uninitalized variables warning in bnxt_xdp.c. Signed-off-by: Andy Gospodarek Signed-off-by: Michael Chan --- drivers/net/ethernet/broadcom/bnxt/bnxt.c | 11 +++-- drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.c | 46 ++++++++++++++----- drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.h | 7 ++- 3 files changed, 48 insertions(+), 16 deletions(-) diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c index 92a1a43b3bee..21d5c76b1e70 100644 --- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c +++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c @@ -1731,6 +1731,7 @@ static int bnxt_rx_pkt(struct bnxt *bp, struct bnxt_cp_ring_info *cpr, u8 *data_ptr, agg_bufs, cmp_type; dma_addr_t dma_addr; struct sk_buff *skb; + struct xdp_buff xdp; u32 flags, misc; void *data; int rc = 0; @@ -1839,11 +1840,13 @@ static int bnxt_rx_pkt(struct bnxt *bp, struct bnxt_cp_ring_info *cpr, len = flags >> RX_CMP_LEN_SHIFT; dma_addr = rx_buf->mapping; - if (bnxt_rx_xdp(bp, rxr, cons, data, &data_ptr, &len, event)) { - rc = 1; - goto next_rx; + if (bnxt_xdp_attached(bp, rxr)) { + bnxt_xdp_buff_init(bp, rxr, cons, &data_ptr, &len, &xdp); + if (bnxt_rx_xdp(bp, rxr, cons, xdp, data, &len, event)) { + rc = 1; + goto next_rx; + } } - if (len <= bp->rx_copy_thresh) { skb = bnxt_copy_skb(bnapi, data_ptr, len, dma_addr); bnxt_reuse_rx_data(rxr, cons, data); diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.c b/drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.c index 52fad0fdeacf..55bd4b835ce3 100644 --- a/drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.c +++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.c @@ -104,18 +104,44 @@ void bnxt_tx_int_xdp(struct bnxt *bp, struct bnxt_napi *bnapi, int nr_pkts) } } +bool bnxt_xdp_attached(struct bnxt *bp, struct bnxt_rx_ring_info *rxr) +{ + struct bpf_prog *xdp_prog = READ_ONCE(rxr->xdp_prog); + + return !!xdp_prog; +} + +void bnxt_xdp_buff_init(struct bnxt *bp, struct bnxt_rx_ring_info *rxr, + u16 cons, u8 **data_ptr, unsigned int *len, + struct xdp_buff *xdp) +{ + struct bnxt_sw_rx_bd *rx_buf; + struct pci_dev *pdev; + dma_addr_t mapping; + u32 offset; + + pdev = bp->pdev; + rx_buf = &rxr->rx_buf_ring[cons]; + offset = bp->rx_offset; + + mapping = rx_buf->mapping - bp->rx_dma_offset; + dma_sync_single_for_cpu(&pdev->dev, mapping + offset, *len, bp->rx_dir); + + xdp_init_buff(xdp, PAGE_SIZE, &rxr->xdp_rxq); + xdp_prepare_buff(xdp, *data_ptr - offset, offset, *len, false); +} + /* returns the following: * true - packet consumed by XDP and new buffer is allocated. * false - packet should be passed to the stack. 
*/ bool bnxt_rx_xdp(struct bnxt *bp, struct bnxt_rx_ring_info *rxr, u16 cons, - struct page *page, u8 **data_ptr, unsigned int *len, u8 *event) + struct xdp_buff xdp, struct page *page, unsigned int *len, u8 *event) { struct bpf_prog *xdp_prog = READ_ONCE(rxr->xdp_prog); struct bnxt_tx_ring_info *txr; struct bnxt_sw_rx_bd *rx_buf; struct pci_dev *pdev; - struct xdp_buff xdp; dma_addr_t mapping; void *orig_data; u32 tx_avail; @@ -126,16 +152,10 @@ bool bnxt_rx_xdp(struct bnxt *bp, struct bnxt_rx_ring_info *rxr, u16 cons, return false; pdev = bp->pdev; - rx_buf = &rxr->rx_buf_ring[cons]; offset = bp->rx_offset; - mapping = rx_buf->mapping - bp->rx_dma_offset; - dma_sync_single_for_cpu(&pdev->dev, mapping + offset, *len, bp->rx_dir); - txr = rxr->bnapi->tx_ring; /* BNXT_RX_PAGE_MODE(bp) when XDP enabled */ - xdp_init_buff(&xdp, PAGE_SIZE, &rxr->xdp_rxq); - xdp_prepare_buff(&xdp, *data_ptr - offset, offset, *len, false); orig_data = xdp.data; act = bpf_prog_run_xdp(xdp_prog, &xdp); @@ -148,15 +168,17 @@ bool bnxt_rx_xdp(struct bnxt *bp, struct bnxt_rx_ring_info *rxr, u16 cons, *event &= ~BNXT_RX_EVENT; *len = xdp.data_end - xdp.data; - if (orig_data != xdp.data) { + if (orig_data != xdp.data) offset = xdp.data - xdp.data_hard_start; - *data_ptr = xdp.data_hard_start + offset; - } + switch (act) { case XDP_PASS: return false; case XDP_TX: + rx_buf = &rxr->rx_buf_ring[cons]; + mapping = rx_buf->mapping - bp->rx_dma_offset; + if (tx_avail < 1) { trace_xdp_exception(bp->dev, xdp_prog, act); bnxt_reuse_rx_data(rxr, cons, page); @@ -175,6 +197,8 @@ bool bnxt_rx_xdp(struct bnxt *bp, struct bnxt_rx_ring_info *rxr, u16 cons, * redirect is coming from a frame received by the * bnxt_en driver. */ + rx_buf = &rxr->rx_buf_ring[cons]; + mapping = rx_buf->mapping - bp->rx_dma_offset; dma_unmap_page_attrs(&pdev->dev, mapping, PAGE_SIZE, bp->rx_dir, DMA_ATTR_WEAK_ORDERING); diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.h b/drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.h index 0df40c3beb05..39690bdb5526 100644 --- a/drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.h +++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.h @@ -15,10 +15,15 @@ struct bnxt_sw_tx_bd *bnxt_xmit_bd(struct bnxt *bp, dma_addr_t mapping, u32 len); void bnxt_tx_int_xdp(struct bnxt *bp, struct bnxt_napi *bnapi, int nr_pkts); bool bnxt_rx_xdp(struct bnxt *bp, struct bnxt_rx_ring_info *rxr, u16 cons, - struct page *page, u8 **data_ptr, unsigned int *len, + struct xdp_buff xdp, struct page *page, unsigned int *len, u8 *event); int bnxt_xdp(struct net_device *dev, struct netdev_bpf *xdp); int bnxt_xdp_xmit(struct net_device *dev, int num_frames, struct xdp_frame **frames, u32 flags); +bool bnxt_xdp_attached(struct bnxt *bp, struct bnxt_rx_ring_info *rxr); + +void bnxt_xdp_buff_init(struct bnxt *bp, struct bnxt_rx_ring_info *rxr, + u16 cons, u8 **data_ptr, unsigned int *len, + struct xdp_buff *xdp); #endif From patchwork Sun Mar 20 19:57:55 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Michael Chan X-Patchwork-Id: 12786677 X-Patchwork-Delegate: kuba@kernel.org Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 154EAC433EF for ; Sun, 20 Mar 2022 19:58:35 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1343624AbiCTT74 (ORCPT ); Sun, 20 Mar 2022 15:59:56 -0400 
From: Michael Chan
To: davem@davemloft.net
Cc: netdev@vger.kernel.org, kuba@kernel.org, gospo@broadcom.com
Subject: [PATCH net-next v2 02/11] bnxt: add flag to denote that an xdp program is currently attached
Date: Sun, 20 Mar 2022 15:57:55 -0400
Message-Id: <1647806284-8529-3-git-send-email-michael.chan@broadcom.com>

From: Andy Gospodarek

This will be used to determine if bnxt_rx_xdp should be called rather than calling it every time.
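In other words, the receive path ends up with roughly the following shape (condensed from the diff below; bnxt_xdp_attached(), bnxt_xdp_buff_init() and bnxt_rx_xdp() are the helpers introduced in the previous patch, the surrounding variables are the ones already in bnxt_rx_pkt()):

        bool xdp_active = false;

        if (bnxt_xdp_attached(bp, rxr)) {
                /* Always describe the packet in the xdp_buff ... */
                bnxt_xdp_buff_init(bp, rxr, cons, &data_ptr, &len, &xdp);
                xdp_active = true;
        }

        /* ... but only run the program for single-buffer packets for now;
         * frames with aggregation buffers are handled by later patches.
         */
        if (!agg_bufs && xdp_active) {
                if (bnxt_rx_xdp(bp, rxr, cons, xdp, data, &len, event)) {
                        rc = 1;
                        goto next_rx;
                }
        }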
Signed-off-by: Andy Gospodarek Signed-off-by: Michael Chan --- drivers/net/ethernet/broadcom/bnxt/bnxt.c | 7 +++++++ 1 file changed, 7 insertions(+) diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c index 21d5c76b1e70..b7d7ee775fdc 100644 --- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c +++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c @@ -1729,6 +1729,7 @@ static int bnxt_rx_pkt(struct bnxt *bp, struct bnxt_cp_ring_info *cpr, struct bnxt_sw_rx_bd *rx_buf; unsigned int len; u8 *data_ptr, agg_bufs, cmp_type; + bool xdp_active = false; dma_addr_t dma_addr; struct sk_buff *skb; struct xdp_buff xdp; @@ -1842,11 +1843,17 @@ static int bnxt_rx_pkt(struct bnxt *bp, struct bnxt_cp_ring_info *cpr, if (bnxt_xdp_attached(bp, rxr)) { bnxt_xdp_buff_init(bp, rxr, cons, &data_ptr, &len, &xdp); + xdp_active = true; + } + + /* skip running XDP prog if there are aggregation bufs */ + if (!agg_bufs && xdp_active) { if (bnxt_rx_xdp(bp, rxr, cons, xdp, data, &len, event)) { rc = 1; goto next_rx; } } + if (len <= bp->rx_copy_thresh) { skb = bnxt_copy_skb(bnapi, data_ptr, len, dma_addr); bnxt_reuse_rx_data(rxr, cons, data); From patchwork Sun Mar 20 19:57:56 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Michael Chan X-Patchwork-Id: 12786680 X-Patchwork-Delegate: kuba@kernel.org Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 22BC1C433F5 for ; Sun, 20 Mar 2022 19:58:39 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1343628AbiCTT75 (ORCPT ); Sun, 20 Mar 2022 15:59:57 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:33790 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S239942AbiCTT7u (ORCPT ); Sun, 20 Mar 2022 15:59:50 -0400 Received: from mail-pf1-x432.google.com (mail-pf1-x432.google.com [IPv6:2607:f8b0:4864:20::432]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 162A534B97 for ; Sun, 20 Mar 2022 12:58:27 -0700 (PDT) Received: by mail-pf1-x432.google.com with SMTP id u22so2460890pfg.6 for ; Sun, 20 Mar 2022 12:58:26 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=broadcom.com; s=google; h=from:to:cc:subject:date:message-id:in-reply-to:references; bh=Y1AqHWcpTZf2YDZ0D4Pd3HGtsTfkO+PI49Z1lC6z1qw=; b=DByn3ElDxdxrOeHx1lyJcCAyZ1rtXphML1F3dOljlD4gtClQX6jxFVoQtMRJTiBwbZ S5MD8dkMfAI82XgLNLSjGMxZDmSP9PZexzfNZp/RU7jOvXP15ZUQQ349/76ytaWMY3f9 kGX4EJtS/a4kMlMtNN3erJXlh5MeA8rXFYMwI= X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references; bh=Y1AqHWcpTZf2YDZ0D4Pd3HGtsTfkO+PI49Z1lC6z1qw=; b=5f2nv3ZiUCyqQ0qJdXNuLO1GNhmocLHJFYzMjY0WpIfE5tW+TdETO+fs9IUVgNKub9 G9XGRlc/rrLq4Z5QRz/r5XOUorD3Fx35Tm2oStFO1Ytx3lUGcTptxv0sSq8RcIF5Bx2M oLkGGZBG0QXVbaEJnKHQq73c7Szrsk2UTXBLVhjyjEsaqhcP4LRwFf8ijrWh9XDTVs3t wfAOMlmAtIo0JT4WgT87K129bd3sEXYtSVVibPlWEyTco0KlQOc2BYdUOUbJ2WMRLnPI 9edkVg3Uf8lxMB9lIxYBElxtSLFMvQYOxhK59nuNWOkLdkLl2OAjTMfqtcRDM4Y7Vq6k Mw6w== X-Gm-Message-State: AOAM531l6gCtmM4jEUsUNLsfWHc40H/L8WRq+Zlv3wt+LwZr7rRNRVMO MXC4EvG7covVCc+niVviMybULPJ8YvlMgQ== X-Google-Smtp-Source: ABdhPJwzc2GpCrqrf4SgEjVm9yhLjjyjj3t1wPzvsmpHR1/+de+GL1cisNvVEN6rdcVhsjbxH6cLIw== X-Received: by 2002:a05:6a00:b53:b0:4fa:6304:6337 with 
SMTP id p19-20020a056a000b5300b004fa63046337mr16573236pfo.1.1647806306333; Sun, 20 Mar 2022 12:58:26 -0700 (PDT) Received: from localhost.net ([192.19.223.252]) by smtp.gmail.com with ESMTPSA id h10-20020a056a001a4a00b004f7c76f29c3sm16418335pfv.24.2022.03.20.12.58.25 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128); Sun, 20 Mar 2022 12:58:26 -0700 (PDT) From: Michael Chan To: davem@davemloft.net Cc: netdev@vger.kernel.org, kuba@kernel.org, gospo@broadcom.com Subject: [PATCH net-next v2 03/11] bnxt: refactor bnxt_rx_pages operate on skb_shared_info Date: Sun, 20 Mar 2022 15:57:56 -0400 Message-Id: <1647806284-8529-4-git-send-email-michael.chan@broadcom.com> X-Mailer: git-send-email 1.8.3.1 In-Reply-To: <1647806284-8529-1-git-send-email-michael.chan@broadcom.com> References: <1647806284-8529-1-git-send-email-michael.chan@broadcom.com> Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org X-Patchwork-Delegate: kuba@kernel.org From: Andy Gospodarek Rather than operating on an sk_buff, add frags from the aggregation ring into the frags of an skb_shared_info. This will allow the caller to use either an sk_buff or xdp_buff. Signed-off-by: Andy Gospodarek Signed-off-by: Michael Chan --- drivers/net/ethernet/broadcom/bnxt/bnxt.c | 50 +++++++++++++++-------- 1 file changed, 33 insertions(+), 17 deletions(-) diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c index b7d7ee775fdc..ba01a353bb3f 100644 --- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c +++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c @@ -1038,22 +1038,23 @@ static struct sk_buff *bnxt_rx_skb(struct bnxt *bp, return skb; } -static struct sk_buff *bnxt_rx_pages(struct bnxt *bp, - struct bnxt_cp_ring_info *cpr, - struct sk_buff *skb, u16 idx, - u32 agg_bufs, bool tpa) +static u32 __bnxt_rx_pages(struct bnxt *bp, + struct bnxt_cp_ring_info *cpr, + struct skb_shared_info *shinfo, + u16 idx, u32 agg_bufs, bool tpa) { struct bnxt_napi *bnapi = cpr->bnapi; struct pci_dev *pdev = bp->pdev; struct bnxt_rx_ring_info *rxr = bnapi->rx_ring; u16 prod = rxr->rx_agg_prod; + u32 i, total_frag_len = 0; bool p5_tpa = false; - u32 i; if ((bp->flags & BNXT_FLAG_CHIP_P5) && tpa) p5_tpa = true; for (i = 0; i < agg_bufs; i++) { + skb_frag_t *frag = &shinfo->frags[i]; u16 cons, frag_len; struct rx_agg_cmp *agg; struct bnxt_sw_rx_agg_bd *cons_rx_buf; @@ -1069,8 +1070,10 @@ static struct sk_buff *bnxt_rx_pages(struct bnxt *bp, RX_AGG_CMP_LEN) >> RX_AGG_CMP_LEN_SHIFT; cons_rx_buf = &rxr->rx_agg_ring[cons]; - skb_fill_page_desc(skb, i, cons_rx_buf->page, - cons_rx_buf->offset, frag_len); + skb_frag_off_set(frag, cons_rx_buf->offset); + skb_frag_size_set(frag, frag_len); + __skb_frag_set_page(frag, cons_rx_buf->page); + shinfo->nr_frags = i + 1; __clear_bit(cons, rxr->rx_agg_bmap); /* It is possible for bnxt_alloc_rx_page() to allocate @@ -1082,15 +1085,10 @@ static struct sk_buff *bnxt_rx_pages(struct bnxt *bp, cons_rx_buf->page = NULL; if (bnxt_alloc_rx_page(bp, rxr, prod, GFP_ATOMIC) != 0) { - struct skb_shared_info *shinfo; unsigned int nr_frags; - shinfo = skb_shinfo(skb); nr_frags = --shinfo->nr_frags; __skb_frag_set_page(&shinfo->frags[nr_frags], NULL); - - dev_kfree_skb(skb); - cons_rx_buf->page = page; /* Update prod since possibly some pages have been @@ -1098,20 +1096,38 @@ static struct sk_buff *bnxt_rx_pages(struct bnxt *bp, */ rxr->rx_agg_prod = prod; bnxt_reuse_rx_agg_bufs(cpr, idx, i, agg_bufs - i, tpa); - return NULL; + return 0; } dma_unmap_page_attrs(&pdev->dev, mapping, 
BNXT_RX_PAGE_SIZE, DMA_FROM_DEVICE, DMA_ATTR_WEAK_ORDERING); - skb->data_len += frag_len; - skb->len += frag_len; - skb->truesize += PAGE_SIZE; - + total_frag_len += frag_len; prod = NEXT_RX_AGG(prod); } rxr->rx_agg_prod = prod; + return total_frag_len; +} + +static struct sk_buff *bnxt_rx_pages(struct bnxt *bp, + struct bnxt_cp_ring_info *cpr, + struct sk_buff *skb, u16 idx, + u32 agg_bufs, bool tpa) +{ + struct skb_shared_info *shinfo = skb_shinfo(skb); + u32 total_frag_len = 0; + + total_frag_len = __bnxt_rx_pages(bp, cpr, shinfo, idx, agg_bufs, tpa); + + if (!total_frag_len) { + dev_kfree_skb(skb); + return NULL; + } + + skb->data_len += total_frag_len; + skb->len += total_frag_len; + skb->truesize += PAGE_SIZE * agg_bufs; return skb; } From patchwork Sun Mar 20 19:57:57 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Michael Chan X-Patchwork-Id: 12786678 X-Patchwork-Delegate: kuba@kernel.org Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 02BD9C433FE for ; Sun, 20 Mar 2022 19:58:39 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1343626AbiCTT76 (ORCPT ); Sun, 20 Mar 2022 15:59:58 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:33872 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1343617AbiCTT7v (ORCPT ); Sun, 20 Mar 2022 15:59:51 -0400 Received: from mail-pl1-x629.google.com (mail-pl1-x629.google.com [IPv6:2607:f8b0:4864:20::629]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 4B4A83587D for ; Sun, 20 Mar 2022 12:58:28 -0700 (PDT) Received: by mail-pl1-x629.google.com with SMTP id d18so11065035plr.6 for ; Sun, 20 Mar 2022 12:58:28 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=broadcom.com; s=google; h=from:to:cc:subject:date:message-id:in-reply-to:references; bh=XxtAdW+2OkMWPZEGxjK4r8Kg0+TC8CcVDbvHSsFo7sE=; b=IeUNOHSuASevcetRh3GZfVT/DZXAn0j7/gpJb7qs5ivwTxKfvYBtcsCObOt8JYApsY dPdrz6TcE+Ws5vYpXpr3RPestmUQUe2RbVzCiCI8Tf+Tjwotu9wnpiOBepNj60M8f+Ac KmUwO5IQdz8g5oS9LbM2vkjz4pTMNeTb/+/30= X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references; bh=XxtAdW+2OkMWPZEGxjK4r8Kg0+TC8CcVDbvHSsFo7sE=; b=4IBsUZlzZh6aoPdW2AoaVHl1h+mGUk0UQAeaqgVa28vgKFQB3vAiKvNz5T/TLfvtVr sJpSaDU4osQfmgtri/zndQzQT2KJP99K60J2ixk1gHiGn0nnNOShtZjNzEhpR9mx6d94 2/z13X68lmG0F+PS7ZaMIOtzrbE6aJBd2pa6kiSxUKEK40V9Sdhco7FdRG47nV0zH5xP 4hVfXO2Cqza5Ou2Lfjp3xgbLqcibcuqumKOYuch/SFzG2DQ0oiCsraJssUK04Io96mMd rpMjunPih1Hun75G+PC0mFvR60ZlswvDFL+Jqb4IzgCkAVtJ+STpT+8feGoemiOCY3/7 i8ig== X-Gm-Message-State: AOAM532TIwAzEWwkcIlzz097+xa/b5V+Bgay0QrQRG0NxWllHiXSWVmE yhf/tTqnf9m/x0AM+kVmw1KSVDEghR9dMw== X-Google-Smtp-Source: ABdhPJwCIjs/3uufaFqGcoKQpYF4+SF5mO78sL4DJI3+ZlEZurtjoAkjZkZfuFFfhrpEL7YxJr+/SQ== X-Received: by 2002:a17:90a:7304:b0:1c6:aadc:90e5 with SMTP id m4-20020a17090a730400b001c6aadc90e5mr15617288pjk.164.1647806307434; Sun, 20 Mar 2022 12:58:27 -0700 (PDT) Received: from localhost.net ([192.19.223.252]) by smtp.gmail.com with ESMTPSA id h10-20020a056a001a4a00b004f7c76f29c3sm16418335pfv.24.2022.03.20.12.58.26 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128); Sun, 20 Mar 2022 12:58:26 -0700 (PDT) From: Michael Chan To: 
davem@davemloft.net Cc: netdev@vger.kernel.org, kuba@kernel.org, gospo@broadcom.com Subject: [PATCH net-next v2 04/11] bnxt: rename bnxt_rx_pages to bnxt_rx_agg_pages_skb Date: Sun, 20 Mar 2022 15:57:57 -0400 Message-Id: <1647806284-8529-5-git-send-email-michael.chan@broadcom.com> X-Mailer: git-send-email 1.8.3.1 In-Reply-To: <1647806284-8529-1-git-send-email-michael.chan@broadcom.com> References: <1647806284-8529-1-git-send-email-michael.chan@broadcom.com> Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org X-Patchwork-Delegate: kuba@kernel.org From: Andy Gospodarek Clarify that this is reading buffers from the aggregation ring. Signed-off-by: Andy Gospodarek Signed-off-by: Michael Chan --- drivers/net/ethernet/broadcom/bnxt/bnxt.c | 22 +++++++++++----------- 1 file changed, 11 insertions(+), 11 deletions(-) diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c index ba01a353bb3f..3324d0070667 100644 --- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c +++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c @@ -1038,10 +1038,10 @@ static struct sk_buff *bnxt_rx_skb(struct bnxt *bp, return skb; } -static u32 __bnxt_rx_pages(struct bnxt *bp, - struct bnxt_cp_ring_info *cpr, - struct skb_shared_info *shinfo, - u16 idx, u32 agg_bufs, bool tpa) +static u32 __bnxt_rx_agg_pages(struct bnxt *bp, + struct bnxt_cp_ring_info *cpr, + struct skb_shared_info *shinfo, + u16 idx, u32 agg_bufs, bool tpa) { struct bnxt_napi *bnapi = cpr->bnapi; struct pci_dev *pdev = bp->pdev; @@ -1110,15 +1110,15 @@ static u32 __bnxt_rx_pages(struct bnxt *bp, return total_frag_len; } -static struct sk_buff *bnxt_rx_pages(struct bnxt *bp, - struct bnxt_cp_ring_info *cpr, - struct sk_buff *skb, u16 idx, - u32 agg_bufs, bool tpa) +static struct sk_buff *bnxt_rx_agg_pages_skb(struct bnxt *bp, + struct bnxt_cp_ring_info *cpr, + struct sk_buff *skb, u16 idx, + u32 agg_bufs, bool tpa) { struct skb_shared_info *shinfo = skb_shinfo(skb); u32 total_frag_len = 0; - total_frag_len = __bnxt_rx_pages(bp, cpr, shinfo, idx, agg_bufs, tpa); + total_frag_len = __bnxt_rx_agg_pages(bp, cpr, shinfo, idx, agg_bufs, tpa); if (!total_frag_len) { dev_kfree_skb(skb); @@ -1660,7 +1660,7 @@ static inline struct sk_buff *bnxt_tpa_end(struct bnxt *bp, } if (agg_bufs) { - skb = bnxt_rx_pages(bp, cpr, skb, idx, agg_bufs, true); + skb = bnxt_rx_agg_pages_skb(bp, cpr, skb, idx, agg_bufs, true); if (!skb) { /* Page reuse already handled by bnxt_rx_pages(). 
*/ cpr->sw_stats.rx.rx_oom_discards += 1; @@ -1898,7 +1898,7 @@ static int bnxt_rx_pkt(struct bnxt *bp, struct bnxt_cp_ring_info *cpr, } if (agg_bufs) { - skb = bnxt_rx_pages(bp, cpr, skb, cp_cons, agg_bufs, false); + skb = bnxt_rx_agg_pages_skb(bp, cpr, skb, cp_cons, agg_bufs, false); if (!skb) { cpr->sw_stats.rx.rx_oom_discards += 1; rc = -ENOMEM; From patchwork Sun Mar 20 19:57:58 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Michael Chan X-Patchwork-Id: 12786681 X-Patchwork-Delegate: kuba@kernel.org Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 3250DC4332F for ; Sun, 20 Mar 2022 19:58:39 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1343617AbiCTT77 (ORCPT ); Sun, 20 Mar 2022 15:59:59 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:33960 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1343618AbiCTT7w (ORCPT ); Sun, 20 Mar 2022 15:59:52 -0400 Received: from mail-pg1-x535.google.com (mail-pg1-x535.google.com [IPv6:2607:f8b0:4864:20::535]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 4F04436163 for ; Sun, 20 Mar 2022 12:58:29 -0700 (PDT) Received: by mail-pg1-x535.google.com with SMTP id e6so8904511pgn.2 for ; Sun, 20 Mar 2022 12:58:29 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=broadcom.com; s=google; h=from:to:cc:subject:date:message-id:in-reply-to:references; bh=as+Gor1YPld9R1CaP7xwp+25DitgFSo4wUsy11/ueo0=; b=JrLNWycgcQPcIo0UyDp+x6h4R4uPY1EMvvX4xUuKqfNaugQKNCm9wM04RIq10ZLhjQ FyADRCCWRtJlczONkY+KzTooEE3dAURDTLhrQWg6uu7s8vb0W0xqqX1pbdS3UEroEFoK mmFM3WH2KnIaljL01/eyAdLc/N9GpyCFIBchw= X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references; bh=as+Gor1YPld9R1CaP7xwp+25DitgFSo4wUsy11/ueo0=; b=3P48xEwUF3B6LpUYXwCYY6sj+r7YqucYU13ssjoQSaPLLXoB5HF389Lj1570/Mwgld QgSVb6ZxTuy8fTtUUTis9+LfVK2ztYtcQMPvpPTyh+nE0tFxxhXtxyuzVpVZoMNV3NnN 1xXxay7HwFre0rUJuaJxmUCfR+cEY+PA1Jl3i74YPfyNTXcieKnwKKkDHV3sOW5XeH8T zs9ltnzwSnegh2THSk8xHW0kKszwyBtZfiMNGOkqILkXJS/i9mRhShxBLXYRUC78dZzs HBccopcH+LWdaQyHWtCt8wkODLj1aq6XaOdBqHnKg5UrN1FJ/kmzP5qA4a08IRcLF9p0 uhRA== X-Gm-Message-State: AOAM533OWmAAaZjARXibU6iRNCyFmvMjO95hKskb6eJx7kbJXmkPE1ab jrkmuTDxaQbCBiXSl4nF1atbcA== X-Google-Smtp-Source: ABdhPJxSHhvp7Zd8N3hy2oBfCuRgM+SvXZ85li4pm6m7u58nMSFXywkYR48obOw788a6e/lX2lpfBQ== X-Received: by 2002:a05:6a00:2284:b0:4f7:86a3:6f6 with SMTP id f4-20020a056a00228400b004f786a306f6mr20497233pfe.20.1647806308308; Sun, 20 Mar 2022 12:58:28 -0700 (PDT) Received: from localhost.net ([192.19.223.252]) by smtp.gmail.com with ESMTPSA id h10-20020a056a001a4a00b004f7c76f29c3sm16418335pfv.24.2022.03.20.12.58.27 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128); Sun, 20 Mar 2022 12:58:28 -0700 (PDT) From: Michael Chan To: davem@davemloft.net Cc: netdev@vger.kernel.org, kuba@kernel.org, gospo@broadcom.com Subject: [PATCH net-next v2 05/11] bnxt: adding bnxt_rx_agg_pages_xdp for aggregated xdp Date: Sun, 20 Mar 2022 15:57:58 -0400 Message-Id: <1647806284-8529-6-git-send-email-michael.chan@broadcom.com> X-Mailer: git-send-email 1.8.3.1 In-Reply-To: <1647806284-8529-1-git-send-email-michael.chan@broadcom.com> References: 
<1647806284-8529-1-git-send-email-michael.chan@broadcom.com> Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org X-Patchwork-Delegate: kuba@kernel.org From: Andy Gospodarek This patch adds a new function that will read pages from the aggregation ring and create an xdp_buff with frags based on the entries in the aggregation ring. Signed-off-by: Andy Gospodarek Signed-off-by: Michael Chan --- drivers/net/ethernet/broadcom/bnxt/bnxt.c | 31 +++++++++++++++++++++++ 1 file changed, 31 insertions(+) diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c index 3324d0070667..4f42efeddb32 100644 --- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c +++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c @@ -1131,6 +1131,27 @@ static struct sk_buff *bnxt_rx_agg_pages_skb(struct bnxt *bp, return skb; } +static u32 bnxt_rx_agg_pages_xdp(struct bnxt *bp, + struct bnxt_cp_ring_info *cpr, + struct xdp_buff *xdp, u16 idx, + u32 agg_bufs, bool tpa) +{ + struct skb_shared_info *shinfo = xdp_get_shared_info_from_buff(xdp); + u32 total_frag_len = 0; + + if (!xdp_buff_has_frags(xdp)) + shinfo->nr_frags = 0; + + total_frag_len = __bnxt_rx_agg_pages(bp, cpr, shinfo, idx, agg_bufs, tpa); + + if (total_frag_len) { + xdp_buff_set_frags_flag(xdp); + shinfo->nr_frags = agg_bufs; + shinfo->xdp_frags_size = total_frag_len; + } + return total_frag_len; +} + static int bnxt_agg_bufs_valid(struct bnxt *bp, struct bnxt_cp_ring_info *cpr, u8 agg_bufs, u32 *raw_cons) { @@ -1859,6 +1880,16 @@ static int bnxt_rx_pkt(struct bnxt *bp, struct bnxt_cp_ring_info *cpr, if (bnxt_xdp_attached(bp, rxr)) { bnxt_xdp_buff_init(bp, rxr, cons, &data_ptr, &len, &xdp); + if (agg_bufs) { + u32 frag_len = bnxt_rx_agg_pages_xdp(bp, cpr, &xdp, + cp_cons, agg_bufs, + false); + if (!frag_len) { + cpr->sw_stats.rx.rx_oom_discards += 1; + rc = -ENOMEM; + goto next_rx; + } + } xdp_active = true; } From patchwork Sun Mar 20 19:57:59 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Michael Chan X-Patchwork-Id: 12786679 X-Patchwork-Delegate: kuba@kernel.org Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id E710DC433EF for ; Sun, 20 Mar 2022 19:58:38 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S239942AbiCTT75 (ORCPT ); Sun, 20 Mar 2022 15:59:57 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:34138 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1343620AbiCTT7z (ORCPT ); Sun, 20 Mar 2022 15:59:55 -0400 Received: from mail-pg1-x529.google.com (mail-pg1-x529.google.com [IPv6:2607:f8b0:4864:20::529]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 674BE36691 for ; Sun, 20 Mar 2022 12:58:30 -0700 (PDT) Received: by mail-pg1-x529.google.com with SMTP id o8so8915436pgf.9 for ; Sun, 20 Mar 2022 12:58:30 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=broadcom.com; s=google; h=from:to:cc:subject:date:message-id:in-reply-to:references; bh=dS2qQBt77o79gHfSP6/4iapkBmHSrCallbezE0k2Q5E=; b=ciqVtE8pnpdnNMCsP0KIzeYXsfWS4TEOOJn82PJ4zpDh2AqQjY/d1ETEZ6+WDvEGel DvCJIckaie5mfSupv1Dp1QP3uLe3qUfRR2PsIh7PneURdUeKKqqSEGsaKsDcJ066RsIy ctDk32itaLp+87NDDMRcKqPKdB2ojmbxdHHQo= X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; 
From: Michael Chan
To: davem@davemloft.net
Cc: netdev@vger.kernel.org, kuba@kernel.org, gospo@broadcom.com
Subject: [PATCH net-next v2 06/11] bnxt: set xdp_buff pfmemalloc flag if needed
Date: Sun, 20 Mar 2022 15:57:59 -0400
Message-Id: <1647806284-8529-7-git-send-email-michael.chan@broadcom.com>

From: Andy Gospodarek

Set the pfmemalloc flag in the xdp_buff so that it can be copied to the skb if needed for an XDP_PASS action.
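The mechanics are small: while frags are attached to the xdp_buff, any page that came from the emergency reserves marks the whole buff, and that mark is later carried over to the skb. A condensed sketch of the frag-fill step (simplified from __bnxt_rx_agg_pages(); the example_* name is illustrative):

        static void example_add_agg_frag(struct xdp_buff *xdp,
                                         struct skb_shared_info *shinfo,
                                         unsigned int i, struct page *page,
                                         u32 offset, u32 frag_len)
        {
                skb_frag_t *frag = &shinfo->frags[i];

                skb_frag_off_set(frag, offset);
                skb_frag_size_set(frag, frag_len);
                __skb_frag_set_page(frag, page);
                shinfo->nr_frags = i + 1;

                /* Remember pfmemalloc pages so the eventual skb can be
                 * marked and dropped for sockets without SOCK_MEMALLOC.
                 */
                if (xdp && page_is_pfmemalloc(page))
                        xdp_buff_set_frag_pfmemalloc(xdp);
        }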
Signed-off-by: Andy Gospodarek Signed-off-by: Michael Chan --- drivers/net/ethernet/broadcom/bnxt/bnxt.c | 14 +++++++++----- 1 file changed, 9 insertions(+), 5 deletions(-) diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c index 4f42efeddb32..05f4b3fbf2e3 100644 --- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c +++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c @@ -1041,7 +1041,8 @@ static struct sk_buff *bnxt_rx_skb(struct bnxt *bp, static u32 __bnxt_rx_agg_pages(struct bnxt *bp, struct bnxt_cp_ring_info *cpr, struct skb_shared_info *shinfo, - u16 idx, u32 agg_bufs, bool tpa) + u16 idx, u32 agg_bufs, bool tpa, + struct xdp_buff *xdp) { struct bnxt_napi *bnapi = cpr->bnapi; struct pci_dev *pdev = bp->pdev; @@ -1084,6 +1085,9 @@ static u32 __bnxt_rx_agg_pages(struct bnxt *bp, page = cons_rx_buf->page; cons_rx_buf->page = NULL; + if (xdp && page_is_pfmemalloc(page)) + xdp_buff_set_frag_pfmemalloc(xdp); + if (bnxt_alloc_rx_page(bp, rxr, prod, GFP_ATOMIC) != 0) { unsigned int nr_frags; @@ -1118,8 +1122,8 @@ static struct sk_buff *bnxt_rx_agg_pages_skb(struct bnxt *bp, struct skb_shared_info *shinfo = skb_shinfo(skb); u32 total_frag_len = 0; - total_frag_len = __bnxt_rx_agg_pages(bp, cpr, shinfo, idx, agg_bufs, tpa); - + total_frag_len = __bnxt_rx_agg_pages(bp, cpr, shinfo, idx, + agg_bufs, tpa, NULL); if (!total_frag_len) { dev_kfree_skb(skb); return NULL; @@ -1142,8 +1146,8 @@ static u32 bnxt_rx_agg_pages_xdp(struct bnxt *bp, if (!xdp_buff_has_frags(xdp)) shinfo->nr_frags = 0; - total_frag_len = __bnxt_rx_agg_pages(bp, cpr, shinfo, idx, agg_bufs, tpa); - + total_frag_len = __bnxt_rx_agg_pages(bp, cpr, shinfo, + idx, agg_bufs, tpa, xdp); if (total_frag_len) { xdp_buff_set_frags_flag(xdp); shinfo->nr_frags = agg_bufs; From patchwork Sun Mar 20 19:58:00 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Michael Chan X-Patchwork-Id: 12786682 X-Patchwork-Delegate: kuba@kernel.org Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 554D1C43219 for ; Sun, 20 Mar 2022 19:58:40 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1343637AbiCTUAA (ORCPT ); Sun, 20 Mar 2022 16:00:00 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:34140 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1343621AbiCTT7z (ORCPT ); Sun, 20 Mar 2022 15:59:55 -0400 Received: from mail-pg1-x529.google.com (mail-pg1-x529.google.com [IPv6:2607:f8b0:4864:20::529]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 1BCF636B46 for ; Sun, 20 Mar 2022 12:58:31 -0700 (PDT) Received: by mail-pg1-x529.google.com with SMTP id c2so8898569pga.10 for ; Sun, 20 Mar 2022 12:58:31 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=broadcom.com; s=google; h=from:to:cc:subject:date:message-id:in-reply-to:references; bh=vXwKl0hJ8UxJdPWcuV1zuggAEwb0Wv4WszfUSBBYtoE=; b=Gg1iB2pGlcRAPd+ks3oa6q0gfeTfwCVFbx0ObD4jQ+N8xmYKKP30ICMaSTrBEmicOp eqE+pfS/aViSlfpI03ZnQKtYnVwlK86yYfugqDcBf+6oXpoSoJoa5fR+kqps7KJemcSP nJHez2w6wwcqhTsoTz0zaDjZn6ijQMm+xu3hE= X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references; bh=vXwKl0hJ8UxJdPWcuV1zuggAEwb0Wv4WszfUSBBYtoE=; 
From: Michael Chan
To: davem@davemloft.net
Cc: netdev@vger.kernel.org, kuba@kernel.org, gospo@broadcom.com
Subject: [PATCH net-next v2 07/11] bnxt: change receive ring space parameters
Date: Sun, 20 Mar 2022 15:58:00 -0400
Message-Id: <1647806284-8529-8-git-send-email-michael.chan@broadcom.com>

From: Andy Gospodarek

Modify ring header data split and jumbo parameters to account for the fact that the design for XDP multibuffer puts close to the first 4k of data in a page and the remaining portions of the packet go in the aggregation ring.
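Concretely, in page mode the headroom, the packet area and the tail struct skb_shared_info must all fit in a single page; the sizing the patch applies in bnxt_set_ring_params() amounts to the following accounting (restated from the diff below, with comments added; assumes the usual 4K PAGE_SIZE):

        rx_space = PAGE_SIZE;
        rx_size  = rx_space -
                   SKB_DATA_ALIGN(ETH_HLEN + NET_IP_ALIGN + 8) -        /* hdr + CRC/VLAN slack */
                   ALIGN(max(NET_SKB_PAD, XDP_PACKET_HEADROOM), 8) -    /* XDP headroom */
                   2 * SKB_DATA_ALIGN(sizeof(struct skb_shared_info));  /* reserved tailroom */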
Signed-off-by: Andy Gospodarek Signed-off-by: Michael Chan --- drivers/net/ethernet/broadcom/bnxt/bnxt.c | 44 +++++++++++++++-------- drivers/net/ethernet/broadcom/bnxt/bnxt.h | 1 + 2 files changed, 30 insertions(+), 15 deletions(-) diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c index 05f4b3fbf2e3..b635b7ce6ba3 100644 --- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c +++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c @@ -56,6 +56,7 @@ #include #include #include +#include #include "bnxt_hsi.h" #include "bnxt.h" @@ -1933,11 +1934,13 @@ static int bnxt_rx_pkt(struct bnxt *bp, struct bnxt_cp_ring_info *cpr, } if (agg_bufs) { - skb = bnxt_rx_agg_pages_skb(bp, cpr, skb, cp_cons, agg_bufs, false); - if (!skb) { - cpr->sw_stats.rx.rx_oom_discards += 1; - rc = -ENOMEM; - goto next_rx; + if (!xdp_active) { + skb = bnxt_rx_agg_pages_skb(bp, cpr, skb, cp_cons, agg_bufs, false); + if (!skb) { + cpr->sw_stats.rx.rx_oom_discards += 1; + rc = -ENOMEM; + goto next_rx; + } } } @@ -3853,7 +3856,7 @@ void bnxt_set_ring_params(struct bnxt *bp) /* 8 for CRC and VLAN */ rx_size = SKB_DATA_ALIGN(bp->dev->mtu + ETH_HLEN + NET_IP_ALIGN + 8); - rx_space = rx_size + NET_SKB_PAD + + rx_space = rx_size + ALIGN(max(NET_SKB_PAD, XDP_PACKET_HEADROOM), 8) + SKB_DATA_ALIGN(sizeof(struct skb_shared_info)); bp->rx_copy_thresh = BNXT_RX_COPY_THRESH; @@ -3894,9 +3897,17 @@ void bnxt_set_ring_params(struct bnxt *bp) } bp->rx_agg_ring_size = agg_ring_size; bp->rx_agg_ring_mask = (bp->rx_agg_nr_pages * RX_DESC_CNT) - 1; - rx_size = SKB_DATA_ALIGN(BNXT_RX_COPY_THRESH + NET_IP_ALIGN); - rx_space = rx_size + NET_SKB_PAD + - SKB_DATA_ALIGN(sizeof(struct skb_shared_info)); + + if (BNXT_RX_PAGE_MODE(bp)) { + rx_space = PAGE_SIZE; + rx_size = rx_space - SKB_DATA_ALIGN(ETH_HLEN + NET_IP_ALIGN + 8) - + ALIGN(max(NET_SKB_PAD, XDP_PACKET_HEADROOM), 8) - + 2 * SKB_DATA_ALIGN(sizeof(struct skb_shared_info)); + } else { + rx_size = SKB_DATA_ALIGN(BNXT_RX_COPY_THRESH + NET_IP_ALIGN); + rx_space = rx_size + NET_SKB_PAD + + SKB_DATA_ALIGN(sizeof(struct skb_shared_info)); + } } bp->rx_buf_use_size = rx_size; @@ -5286,12 +5297,15 @@ static int bnxt_hwrm_vnic_set_hds(struct bnxt *bp, u16 vnic_id) if (rc) return rc; - req->flags = cpu_to_le32(VNIC_PLCMODES_CFG_REQ_FLAGS_JUMBO_PLACEMENT | - VNIC_PLCMODES_CFG_REQ_FLAGS_HDS_IPV4 | - VNIC_PLCMODES_CFG_REQ_FLAGS_HDS_IPV6); - req->enables = - cpu_to_le32(VNIC_PLCMODES_CFG_REQ_ENABLES_JUMBO_THRESH_VALID | - VNIC_PLCMODES_CFG_REQ_ENABLES_HDS_THRESHOLD_VALID); + req->flags = cpu_to_le32(VNIC_PLCMODES_CFG_REQ_FLAGS_JUMBO_PLACEMENT); + req->enables = cpu_to_le32(VNIC_PLCMODES_CFG_REQ_ENABLES_JUMBO_THRESH_VALID); + + if (BNXT_RX_PAGE_MODE(bp) && !BNXT_RX_JUMBO_MODE(bp)) { + req->flags |= cpu_to_le32(VNIC_PLCMODES_CFG_REQ_FLAGS_HDS_IPV4 | + VNIC_PLCMODES_CFG_REQ_FLAGS_HDS_IPV6); + req->enables |= + cpu_to_le32(VNIC_PLCMODES_CFG_REQ_ENABLES_HDS_THRESHOLD_VALID); + } /* thresholds not implemented in firmware yet */ req->jumbo_thresh = cpu_to_le16(bp->rx_copy_thresh); req->hds_threshold = cpu_to_le16(bp->rx_copy_thresh); diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.h b/drivers/net/ethernet/broadcom/bnxt/bnxt.h index 447a9406b8a2..9e2dabb58519 100644 --- a/drivers/net/ethernet/broadcom/bnxt/bnxt.h +++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.h @@ -1814,6 +1814,7 @@ struct bnxt { #define BNXT_SUPPORTS_TPA(bp) (!BNXT_CHIP_TYPE_NITRO_A0(bp) && \ (!((bp)->flags & BNXT_FLAG_CHIP_P5) || \ (bp)->max_tpa_v2) && !is_kdump_kernel()) +#define BNXT_RX_JUMBO_MODE(bp) 
((bp)->flags & BNXT_FLAG_JUMBO) #define BNXT_CHIP_SR2(bp) \ ((bp)->chip_num == CHIP_NUM_58818) From patchwork Sun Mar 20 19:58:01 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Michael Chan X-Patchwork-Id: 12786683 X-Patchwork-Delegate: kuba@kernel.org Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 556DAC433EF for ; Sun, 20 Mar 2022 19:58:41 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1343638AbiCTUAC (ORCPT ); Sun, 20 Mar 2022 16:00:02 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:34194 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1343627AbiCTT74 (ORCPT ); Sun, 20 Mar 2022 15:59:56 -0400 Received: from mail-pg1-x52e.google.com (mail-pg1-x52e.google.com [IPv6:2607:f8b0:4864:20::52e]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 8473C36E25 for ; Sun, 20 Mar 2022 12:58:32 -0700 (PDT) Received: by mail-pg1-x52e.google.com with SMTP id t187so8904143pgb.1 for ; Sun, 20 Mar 2022 12:58:32 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=broadcom.com; s=google; h=from:to:cc:subject:date:message-id:in-reply-to:references; bh=Nkux6K8D2N0KzkeTeO7biUJw+8VehVM2FBf9QijE0w8=; b=X4XJn2MlFK9ZoXoL23zBknpMxflUXksLWUqSpWtvt5DcDOjcxmyrvitFc7e1e9fnFU b7nnBb6hLKi0JYvk3dbA2xrOwZd0MblQv0nhhRkAHbiDV/DqB4XrVP4DVmeDhSZIwsio kYzKkS6Q0TdHGTYCKpDU4DyzaZ4LJgt6sdeF4= X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references; bh=Nkux6K8D2N0KzkeTeO7biUJw+8VehVM2FBf9QijE0w8=; b=tgCRsU99JXN1qHYChPP3R5fYBXnjHEEEEplrM6d6WrErLW6v8ICC1JVMKLTegYA75k whw9zbsPbnJNdLgMfvyvh7r0H1t7UTmfXbpwyw6TzZKwUcisnk47HnJtr7Zb/tNUqkZn Vntz5XWPdlBEnONwtyCMK/qtxgZ9GP/ScYNXGYtbqU/YHUtcemCY+QWLQP59o6ga8ELP KZbBgq3s/YvHIHV3SiMl+jyyQLU8PujI+/hIq+IitFe4zcVQ1mopegnCNrMRwNGIYdHx 4aIMamC/N+OSJSeDm+/Orj0yIb6/ubFcdf8nSDAM1DJaOM73GSrpLBIlEbs6vVWXmKaN u+hQ== X-Gm-Message-State: AOAM533MH2Uqi2sCzRItCyFdvirDjH2i1nBWzOq4SleZx8IneVIOiYvt jgA0gFZtKB4K6bQkmbMlKvA48yQvVsANZg== X-Google-Smtp-Source: ABdhPJx1V2YJbpF8PLkSAoyCajTzW4TkY37it/3BeoqcXNlec8pavlHr3NMMYoAjT9I9hHYtxSrj/w== X-Received: by 2002:a05:6a00:140c:b0:4e1:530c:edc0 with SMTP id l12-20020a056a00140c00b004e1530cedc0mr20437566pfu.18.1647806311620; Sun, 20 Mar 2022 12:58:31 -0700 (PDT) Received: from localhost.net ([192.19.223.252]) by smtp.gmail.com with ESMTPSA id h10-20020a056a001a4a00b004f7c76f29c3sm16418335pfv.24.2022.03.20.12.58.30 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128); Sun, 20 Mar 2022 12:58:31 -0700 (PDT) From: Michael Chan To: davem@davemloft.net Cc: netdev@vger.kernel.org, kuba@kernel.org, gospo@broadcom.com Subject: [PATCH net-next v2 08/11] bnxt: add page_pool support for aggregation ring when using xdp Date: Sun, 20 Mar 2022 15:58:01 -0400 Message-Id: <1647806284-8529-9-git-send-email-michael.chan@broadcom.com> X-Mailer: git-send-email 1.8.3.1 In-Reply-To: <1647806284-8529-1-git-send-email-michael.chan@broadcom.com> References: <1647806284-8529-1-git-send-email-michael.chan@broadcom.com> Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org X-Patchwork-Delegate: kuba@kernel.org From: Andy Gospodarek If we are using aggregation rings with XDP enabled, allocate page 
buffers for the aggregation rings from the page_pool. Signed-off-by: Andy Gospodarek Signed-off-by: Michael Chan --- drivers/net/ethernet/broadcom/bnxt/bnxt.c | 77 ++++++++++++++--------- 1 file changed, 47 insertions(+), 30 deletions(-) diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c index b635b7ce6ba3..980c176d7c88 100644 --- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c +++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c @@ -739,7 +739,6 @@ static struct page *__bnxt_alloc_rx_page(struct bnxt *bp, dma_addr_t *mapping, page_pool_recycle_direct(rxr->page_pool, page); return NULL; } - *mapping += bp->rx_dma_offset; return page; } @@ -781,6 +780,7 @@ int bnxt_alloc_rx_data(struct bnxt *bp, struct bnxt_rx_ring_info *rxr, if (!page) return -ENOMEM; + mapping += bp->rx_dma_offset; rx_buf->data = page; rx_buf->data_ptr = page_address(page) + bp->rx_offset; } else { @@ -841,33 +841,41 @@ static inline int bnxt_alloc_rx_page(struct bnxt *bp, u16 sw_prod = rxr->rx_sw_agg_prod; unsigned int offset = 0; - if (PAGE_SIZE > BNXT_RX_PAGE_SIZE) { - page = rxr->rx_page; - if (!page) { + if (BNXT_RX_PAGE_MODE(bp)) { + page = __bnxt_alloc_rx_page(bp, &mapping, rxr, gfp); + + if (!page) + return -ENOMEM; + + } else { + if (PAGE_SIZE > BNXT_RX_PAGE_SIZE) { + page = rxr->rx_page; + if (!page) { + page = alloc_page(gfp); + if (!page) + return -ENOMEM; + rxr->rx_page = page; + rxr->rx_page_offset = 0; + } + offset = rxr->rx_page_offset; + rxr->rx_page_offset += BNXT_RX_PAGE_SIZE; + if (rxr->rx_page_offset == PAGE_SIZE) + rxr->rx_page = NULL; + else + get_page(page); + } else { page = alloc_page(gfp); if (!page) return -ENOMEM; - rxr->rx_page = page; - rxr->rx_page_offset = 0; } - offset = rxr->rx_page_offset; - rxr->rx_page_offset += BNXT_RX_PAGE_SIZE; - if (rxr->rx_page_offset == PAGE_SIZE) - rxr->rx_page = NULL; - else - get_page(page); - } else { - page = alloc_page(gfp); - if (!page) - return -ENOMEM; - } - mapping = dma_map_page_attrs(&pdev->dev, page, offset, - BNXT_RX_PAGE_SIZE, DMA_FROM_DEVICE, - DMA_ATTR_WEAK_ORDERING); - if (dma_mapping_error(&pdev->dev, mapping)) { - __free_page(page); - return -EIO; + mapping = dma_map_page_attrs(&pdev->dev, page, offset, + BNXT_RX_PAGE_SIZE, DMA_FROM_DEVICE, + DMA_ATTR_WEAK_ORDERING); + if (dma_mapping_error(&pdev->dev, mapping)) { + __free_page(page); + return -EIO; + } } if (unlikely(test_bit(sw_prod, rxr->rx_agg_bmap))) @@ -1105,7 +1113,7 @@ static u32 __bnxt_rx_agg_pages(struct bnxt *bp, } dma_unmap_page_attrs(&pdev->dev, mapping, BNXT_RX_PAGE_SIZE, - DMA_FROM_DEVICE, + bp->rx_dir, DMA_ATTR_WEAK_ORDERING); total_frag_len += frag_len; @@ -2936,14 +2944,23 @@ static void bnxt_free_one_rx_ring_skbs(struct bnxt *bp, int ring_nr) if (!page) continue; - dma_unmap_page_attrs(&pdev->dev, rx_agg_buf->mapping, - BNXT_RX_PAGE_SIZE, DMA_FROM_DEVICE, - DMA_ATTR_WEAK_ORDERING); + if (BNXT_RX_PAGE_MODE(bp)) { + dma_unmap_page_attrs(&pdev->dev, rx_agg_buf->mapping, + BNXT_RX_PAGE_SIZE, bp->rx_dir, + DMA_ATTR_WEAK_ORDERING); + rx_agg_buf->page = NULL; + __clear_bit(i, rxr->rx_agg_bmap); - rx_agg_buf->page = NULL; - __clear_bit(i, rxr->rx_agg_bmap); + page_pool_recycle_direct(rxr->page_pool, page); + } else { + dma_unmap_page_attrs(&pdev->dev, rx_agg_buf->mapping, + BNXT_RX_PAGE_SIZE, DMA_FROM_DEVICE, + DMA_ATTR_WEAK_ORDERING); + rx_agg_buf->page = NULL; + __clear_bit(i, rxr->rx_agg_bmap); - __free_page(page); + __free_page(page); + } } skip_rx_agg_free: From patchwork Sun Mar 20 19:58:02 2022 Content-Type: text/plain; charset="utf-8" 
From: Michael Chan
To: davem@davemloft.net
Cc: netdev@vger.kernel.org, kuba@kernel.org, gospo@broadcom.com
Subject: [PATCH net-next v2 09/11] bnxt: adding bnxt_xdp_build_skb to build skb from multibuffer xdp_buff
Date: Sun, 20 Mar 2022 15:58:02 -0400
Message-Id: <1647806284-8529-10-git-send-email-michael.chan@broadcom.com>

From: Andy Gospodarek

Since we have an xdp_buff with frags there needs to be a way to convert that into a valid sk_buff in the event that XDP_PASS is the resulting operation.
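A minimal sketch of that conversion for the XDP_PASS case, using the generic multi-buffer helpers. Simplified and illustrative only: it assumes the linear skb was already built over xdp->data (so the skb's shared info is the one the frags were written into) and skips the checksum handling done by the real bnxt_xdp_build_skb().

        static struct sk_buff *example_finish_mb_skb(struct sk_buff *skb,
                                                     struct xdp_buff *xdp)
        {
                struct skb_shared_info *sinfo;

                if (!xdp_buff_has_frags(xdp))
                        return skb;             /* single-buffer packet */

                sinfo = xdp_get_shared_info_from_buff(xdp);
                /* Hand the frag array to the skb in one step, carrying the
                 * total frag length, truesize and pfmemalloc state along.
                 */
                xdp_update_skb_shared_info(skb, sinfo->nr_frags,
                                           sinfo->xdp_frags_size,
                                           PAGE_SIZE * sinfo->nr_frags,
                                           xdp_buff_is_frag_pfmemalloc(xdp));
                return skb;
        }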
This adds a new rx_skb_func when the netdev has an MTU that prevents the packets from sitting in a single page. This also make sure that GRO/LRO stay disabled even when using the aggregation ring for large buffers. Signed-off-by: Andy Gospodarek Signed-off-by: Michael Chan --- drivers/net/ethernet/broadcom/bnxt/bnxt.c | 63 ++++++++++++++++--- drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.c | 39 ++++++++++++ drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.h | 3 + 3 files changed, 98 insertions(+), 7 deletions(-) diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c index 980c176d7c88..b92f5ef31132 100644 --- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c +++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c @@ -971,6 +971,37 @@ static void bnxt_reuse_rx_agg_bufs(struct bnxt_cp_ring_info *cpr, u16 idx, rxr->rx_sw_agg_prod = sw_prod; } +static struct sk_buff *bnxt_rx_multi_page_skb(struct bnxt *bp, + struct bnxt_rx_ring_info *rxr, + u16 cons, void *data, u8 *data_ptr, + dma_addr_t dma_addr, + unsigned int offset_and_len) +{ + unsigned int len = offset_and_len & 0xffff; + struct page *page = data; + u16 prod = rxr->rx_prod; + struct sk_buff *skb; + int err; + + err = bnxt_alloc_rx_data(bp, rxr, prod, GFP_ATOMIC); + if (unlikely(err)) { + bnxt_reuse_rx_data(rxr, cons, data); + return NULL; + } + dma_addr -= bp->rx_dma_offset; + dma_unmap_page_attrs(&bp->pdev->dev, dma_addr, PAGE_SIZE, bp->rx_dir, + DMA_ATTR_WEAK_ORDERING); + skb = build_skb(page_address(page), PAGE_SIZE - bp->rx_dma_offset); + if (!skb) { + __free_page(page); + return NULL; + } + skb_mark_for_recycle(skb); + skb_reserve(skb, bp->rx_dma_offset); + __skb_put(skb, len); + return skb; +} + static struct sk_buff *bnxt_rx_page_skb(struct bnxt *bp, struct bnxt_rx_ring_info *rxr, u16 cons, void *data, u8 *data_ptr, @@ -993,7 +1024,6 @@ static struct sk_buff *bnxt_rx_page_skb(struct bnxt *bp, dma_addr -= bp->rx_dma_offset; dma_unmap_page_attrs(&bp->pdev->dev, dma_addr, PAGE_SIZE, bp->rx_dir, DMA_ATTR_WEAK_ORDERING); - page_pool_release_page(rxr->page_pool, page); if (unlikely(!payload)) payload = eth_get_headlen(bp->dev, data_ptr, len); @@ -1004,6 +1034,7 @@ static struct sk_buff *bnxt_rx_page_skb(struct bnxt *bp, return NULL; } + skb_mark_for_recycle(skb); off = (void *)data_ptr - page_address(page); skb_add_rx_frag(skb, 0, page, off, len, PAGE_SIZE); memcpy(skb->data - NET_IP_ALIGN, data_ptr - NET_IP_ALIGN, @@ -1949,6 +1980,14 @@ static int bnxt_rx_pkt(struct bnxt *bp, struct bnxt_cp_ring_info *cpr, rc = -ENOMEM; goto next_rx; } + } else { + skb = bnxt_xdp_build_skb(bp, skb, rxr->page_pool, &xdp, rxcmp1); + if (!skb) { + /* we should be able to free the old skb here */ + cpr->sw_stats.rx.rx_oom_discards += 1; + rc = -ENOMEM; + goto next_rx; + } } } @@ -3965,14 +4004,21 @@ void bnxt_set_ring_params(struct bnxt *bp) int bnxt_set_rx_skb_mode(struct bnxt *bp, bool page_mode) { if (page_mode) { - if (bp->dev->mtu > BNXT_MAX_PAGE_MODE_MTU) - return -EOPNOTSUPP; - bp->dev->max_mtu = - min_t(u16, bp->max_mtu, BNXT_MAX_PAGE_MODE_MTU); bp->flags &= ~BNXT_FLAG_AGG_RINGS; - bp->flags |= BNXT_FLAG_NO_AGG_RINGS | BNXT_FLAG_RX_PAGE_MODE; + bp->flags |= BNXT_FLAG_RX_PAGE_MODE; + + if (bp->dev->mtu > BNXT_MAX_PAGE_MODE_MTU) { + bp->flags |= BNXT_FLAG_JUMBO; + bp->rx_skb_func = bnxt_rx_multi_page_skb; + bp->dev->max_mtu = + min_t(u16, bp->max_mtu, BNXT_MAX_MTU); + } else { + bp->flags |= BNXT_FLAG_NO_AGG_RINGS; + bp->rx_skb_func = bnxt_rx_page_skb; + bp->dev->max_mtu = + min_t(u16, bp->max_mtu, BNXT_MAX_PAGE_MODE_MTU); + } 
bp->rx_dir = DMA_BIDIRECTIONAL; - bp->rx_skb_func = bnxt_rx_page_skb; /* Disable LRO or GRO_HW */ netdev_update_features(bp->dev); } else { @@ -11116,6 +11162,9 @@ static netdev_features_t bnxt_fix_features(struct net_device *dev, if (bp->flags & BNXT_FLAG_NO_AGG_RINGS) features &= ~(NETIF_F_LRO | NETIF_F_GRO_HW); + if (!(bp->flags & BNXT_FLAG_TPA)) + features &= ~(NETIF_F_LRO | NETIF_F_GRO_HW); + if (!(features & NETIF_F_GRO)) features &= ~NETIF_F_GRO_HW; diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.c b/drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.c index 55bd4b835ce3..21302d88e909 100644 --- a/drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.c +++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.c @@ -351,3 +351,42 @@ int bnxt_xdp(struct net_device *dev, struct netdev_bpf *xdp) } return rc; } + +struct sk_buff * +bnxt_xdp_build_skb(struct bnxt *bp, struct sk_buff *skb, struct page_pool *pool, + struct xdp_buff *xdp, struct rx_cmp_ext *rxcmp1) +{ + struct skb_shared_info *sinfo = xdp_get_shared_info_from_buff(xdp); + u8 num_frags = 0, i; + + if (unlikely(xdp_buff_has_frags(xdp))) + num_frags = sinfo->nr_frags; + + if (!skb) + return NULL; + + skb_checksum_none_assert(skb); + if (RX_CMP_L4_CS_OK(rxcmp1)) { + if (bp->dev->features & NETIF_F_RXCSUM) { + skb->ip_summed = CHECKSUM_UNNECESSARY; + skb->csum_level = RX_CMP_ENCAP(rxcmp1); + } + } + + if (unlikely(xdp_buff_has_frags(xdp))) { + xdp_update_skb_shared_info(skb, sinfo->nr_frags, + sinfo->xdp_frags_size, + PAGE_SIZE * sinfo->nr_frags, + xdp_buff_is_frag_pfmemalloc(xdp)); + } + /* move each frag into the skb and release its page from the page_pool; the skb now owns it */ + for (i = 0; i < num_frags; i++) { + skb_frag_t *frag = &skb_shinfo(skb)->frags[i]; + + skb_frag_set_page(skb, i, skb_frag_page(&sinfo->frags[i])); + skb_frag_size_set(frag, skb_frag_size(&sinfo->frags[i])); + skb_frag_off_set(frag, skb_frag_off(&sinfo->frags[i])); + page_pool_release_page(pool, skb_frag_page(frag)); + } + return skb; +} diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.h b/drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.h index 39690bdb5526..45134d299931 100644 --- a/drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.h +++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.h @@ -26,4 +26,7 @@ bool bnxt_xdp_attached(struct bnxt *bp, struct bnxt_rx_ring_info *rxr); void bnxt_xdp_buff_init(struct bnxt *bp, struct bnxt_rx_ring_info *rxr, u16 cons, u8 **data_ptr, unsigned int *len, struct xdp_buff *xdp); +struct sk_buff *bnxt_xdp_build_skb(struct bnxt *bp, struct sk_buff *skb, + struct page_pool *pool, struct xdp_buff *xdp, + struct rx_cmp_ext *rxcmp1); #endif From patchwork Sun Mar 20 19:58:03 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Michael Chan X-Patchwork-Id: 12786686 X-Patchwork-Delegate: kuba@kernel.org
From: Michael Chan To: davem@davemloft.net Cc: netdev@vger.kernel.org, kuba@kernel.org, gospo@broadcom.com Subject: [PATCH net-next v2 10/11] bnxt: support transmit and free of aggregation buffers Date: Sun, 20 Mar 2022 15:58:03 -0400 Message-Id: <1647806284-8529-11-git-send-email-michael.chan@broadcom.com> X-Mailer: git-send-email 1.8.3.1 In-Reply-To: <1647806284-8529-1-git-send-email-michael.chan@broadcom.com> References: <1647806284-8529-1-git-send-email-michael.chan@broadcom.com> Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org X-Patchwork-Delegate: kuba@kernel.org From: Andy Gospodarek This patch adds the following features: - Support for XDP_TX and XDP_DROP action when using xdp_buff with frags - Support for freeing all frags attached to an xdp_buff - Cleanup of TX ring buffers after transmits complete - Slight change in definition of bnxt_sw_tx_bd since nr_frags and RX producer may both need to be used v2: Fix uninitialized variable warning in bnxt_xdp_buff_frags_free().
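The frag-handling pattern that recurs below (bnxt_xdp_buff_frags_free() and the per-frag loop in bnxt_tx_int_xdp()) amounts to walking the xdp_buff's skb_shared_info area and handing each fragment page back to the RX ring's page_pool. A minimal sketch of that pattern, assuming the multibuffer XDP helpers from net/xdp.h; the standalone helper name xdp_frags_recycle() is illustrative and not part of this patch:

#include <linux/skbuff.h>
#include <net/page_pool.h>
#include <net/xdp.h>

/* Illustrative only: recycle every frag page attached to a multibuffer
 * xdp_buff back to the page_pool it came from, then forget the frags.
 */
static void xdp_frags_recycle(struct page_pool *pool, struct xdp_buff *xdp)
{
	struct skb_shared_info *shinfo;
	int i;

	if (!xdp_buff_has_frags(xdp))
		return;

	shinfo = xdp_get_shared_info_from_buff(xdp);
	for (i = 0; i < shinfo->nr_frags; i++) {
		struct page *page = skb_frag_page(&shinfo->frags[i]);

		/* pages were allocated from the RX ring's page_pool, so
		 * recycle them directly instead of calling put_page()
		 */
		page_pool_recycle_direct(pool, page);
	}
	shinfo->nr_frags = 0;
}

On the TX completion side the same recycling is applied per completed descriptor, which is why bnxt_sw_tx_bd now records the page and nr_frags.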
Signed-off-by: Andy Gospodarek Signed-off-by: Michael Chan --- drivers/net/ethernet/broadcom/bnxt/bnxt.c | 18 ++- drivers/net/ethernet/broadcom/bnxt/bnxt.h | 7 +- .../net/ethernet/broadcom/bnxt/bnxt_ethtool.c | 2 +- drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.c | 114 ++++++++++++++++-- drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.h | 5 +- 5 files changed, 123 insertions(+), 23 deletions(-) diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c index b92f5ef31132..84c89ee7dc2f 100644 --- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c +++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c @@ -1949,9 +1949,13 @@ static int bnxt_rx_pkt(struct bnxt *bp, struct bnxt_cp_ring_info *cpr, skb = bnxt_copy_skb(bnapi, data_ptr, len, dma_addr); bnxt_reuse_rx_data(rxr, cons, data); if (!skb) { - if (agg_bufs) - bnxt_reuse_rx_agg_bufs(cpr, cp_cons, 0, - agg_bufs, false); + if (agg_bufs) { + if (!xdp_active) + bnxt_reuse_rx_agg_bufs(cpr, cp_cons, 0, + agg_bufs, false); + else + bnxt_xdp_buff_frags_free(rxr, &xdp); + } cpr->sw_stats.rx.rx_oom_discards += 1; rc = -ENOMEM; goto next_rx; @@ -1984,6 +1988,7 @@ static int bnxt_rx_pkt(struct bnxt *bp, struct bnxt_cp_ring_info *cpr, skb = bnxt_xdp_build_skb(bp, skb, rxr->page_pool, &xdp, rxcmp1); if (!skb) { /* we should be able to free the old skb here */ + bnxt_xdp_buff_frags_free(rxr, &xdp); cpr->sw_stats.rx.rx_oom_discards += 1; rc = -ENOMEM; goto next_rx; @@ -2603,10 +2608,13 @@ static void __bnxt_poll_work_done(struct bnxt *bp, struct bnxt_napi *bnapi) if ((bnapi->events & BNXT_RX_EVENT) && !(bnapi->in_reset)) { struct bnxt_rx_ring_info *rxr = bnapi->rx_ring; - if (bnapi->events & BNXT_AGG_EVENT) - bnxt_db_write(bp, &rxr->rx_agg_db, rxr->rx_agg_prod); bnxt_db_write(bp, &rxr->rx_db, rxr->rx_prod); } + if (bnapi->events & BNXT_AGG_EVENT) { + struct bnxt_rx_ring_info *rxr = bnapi->rx_ring; + + bnxt_db_write(bp, &rxr->rx_agg_db, rxr->rx_agg_prod); + } bnapi->events = 0; } diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.h b/drivers/net/ethernet/broadcom/bnxt/bnxt.h index 9e2dabb58519..801aa40f602f 100644 --- a/drivers/net/ethernet/broadcom/bnxt/bnxt.h +++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.h @@ -698,13 +698,12 @@ struct bnxt_sw_tx_bd { }; DEFINE_DMA_UNMAP_ADDR(mapping); DEFINE_DMA_UNMAP_LEN(len); + struct page *page; u8 is_gso; u8 is_push; u8 action; - union { - unsigned short nr_frags; - u16 rx_prod; - }; + unsigned short nr_frags; + u16 rx_prod; }; struct bnxt_sw_rx_bd { diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c b/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c index 22e965e18fbc..b3a48d6675fe 100644 --- a/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c +++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c @@ -3491,7 +3491,7 @@ static int bnxt_run_loopback(struct bnxt *bp) dev_kfree_skb(skb); return -EIO; } - bnxt_xmit_bd(bp, txr, map, pkt_size); + bnxt_xmit_bd(bp, txr, map, pkt_size, NULL); /* Sync BD data before updating doorbell */ wmb(); diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.c b/drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.c index 21302d88e909..adbd92971209 100644 --- a/drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.c +++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.c @@ -22,36 +22,91 @@ struct bnxt_sw_tx_bd *bnxt_xmit_bd(struct bnxt *bp, struct bnxt_tx_ring_info *txr, - dma_addr_t mapping, u32 len) + dma_addr_t mapping, u32 len, + struct xdp_buff *xdp) { - struct bnxt_sw_tx_bd *tx_buf; + struct skb_shared_info *sinfo; + struct bnxt_sw_tx_bd *tx_buf, *first_buf; struct 
tx_bd *txbd; + int num_frags = 0; u32 flags; u16 prod; + int i; + + if (xdp && xdp_buff_has_frags(xdp)) { + sinfo = xdp_get_shared_info_from_buff(xdp); + num_frags = sinfo->nr_frags; + } + /* fill up the first buffer */ prod = txr->tx_prod; tx_buf = &txr->tx_buf_ring[prod]; + first_buf = tx_buf; + tx_buf->nr_frags = num_frags; + if (xdp) + tx_buf->page = virt_to_head_page(xdp->data); txbd = &txr->tx_desc_ring[TX_RING(prod)][TX_IDX(prod)]; - flags = (len << TX_BD_LEN_SHIFT) | (1 << TX_BD_FLAGS_BD_CNT_SHIFT) | - TX_BD_FLAGS_PACKET_END | bnxt_lhint_arr[len >> 9]; + flags = ((len) << TX_BD_LEN_SHIFT) | ((num_frags + 1) << TX_BD_FLAGS_BD_CNT_SHIFT); txbd->tx_bd_len_flags_type = cpu_to_le32(flags); txbd->tx_bd_opaque = prod; txbd->tx_bd_haddr = cpu_to_le64(mapping); + /* now let us fill up the frags into the next buffers */ + for (i = 0; i < num_frags ; i++) { + skb_frag_t *frag = &sinfo->frags[i]; + struct bnxt_sw_tx_bd *frag_tx_buf; + struct pci_dev *pdev = bp->pdev; + dma_addr_t frag_mapping; + int frag_len; + + prod = NEXT_TX(prod); + txr->tx_prod = prod; + + /* fill the next TX buffer entry with this frag */ + frag_tx_buf = &txr->tx_buf_ring[prod]; + frag_tx_buf->page = skb_frag_page(frag); + + txbd = &txr->tx_desc_ring[TX_RING(prod)][TX_IDX(prod)]; + + frag_len = skb_frag_size(frag); + frag_mapping = skb_frag_dma_map(&pdev->dev, frag, 0, + frag_len, DMA_TO_DEVICE); + + if (unlikely(dma_mapping_error(&pdev->dev, frag_mapping))) + return NULL; + + dma_unmap_addr_set(frag_tx_buf, mapping, frag_mapping); + + flags = frag_len << TX_BD_LEN_SHIFT; + txbd->tx_bd_len_flags_type = cpu_to_le32(flags); + txbd->tx_bd_opaque = prod; + txbd->tx_bd_haddr = cpu_to_le64(frag_mapping); + + len = frag_len; + } + + flags &= ~TX_BD_LEN; + txbd->tx_bd_len_flags_type = cpu_to_le32(((len) << TX_BD_LEN_SHIFT) | flags | + TX_BD_FLAGS_PACKET_END); + /* Sync TX BD */ + wmb(); prod = NEXT_TX(prod); txr->tx_prod = prod; - return tx_buf; + + return first_buf; } static void __bnxt_xmit_xdp(struct bnxt *bp, struct bnxt_tx_ring_info *txr, - dma_addr_t mapping, u32 len, u16 rx_prod) + dma_addr_t mapping, u32 len, u16 rx_prod, + struct xdp_buff *xdp) { struct bnxt_sw_tx_bd *tx_buf; - tx_buf = bnxt_xmit_bd(bp, txr, mapping, len); + tx_buf = bnxt_xmit_bd(bp, txr, mapping, len, xdp); tx_buf->rx_prod = rx_prod; tx_buf->action = XDP_TX; + } static void __bnxt_xmit_xdp_redirect(struct bnxt *bp, @@ -61,7 +116,7 @@ static void __bnxt_xmit_xdp_redirect(struct bnxt *bp, { struct bnxt_sw_tx_bd *tx_buf; - tx_buf = bnxt_xmit_bd(bp, txr, mapping, len); + tx_buf = bnxt_xmit_bd(bp, txr, mapping, len, NULL); tx_buf->action = XDP_REDIRECT; tx_buf->xdpf = xdpf; dma_unmap_addr_set(tx_buf, mapping, mapping); @@ -76,7 +131,7 @@ void bnxt_tx_int_xdp(struct bnxt *bp, struct bnxt_napi *bnapi, int nr_pkts) struct bnxt_sw_tx_bd *tx_buf; u16 tx_cons = txr->tx_cons; u16 last_tx_cons = tx_cons; - int i; + int i, j, frags; for (i = 0; i < nr_pkts; i++) { tx_buf = &txr->tx_buf_ring[tx_cons]; @@ -94,6 +149,13 @@ void bnxt_tx_int_xdp(struct bnxt *bp, struct bnxt_napi *bnapi, int nr_pkts) } else if (tx_buf->action == XDP_TX) { rx_doorbell_needed = true; last_tx_cons = tx_cons; + + frags = tx_buf->nr_frags; + for (j = 0; j < frags; j++) { + tx_cons = NEXT_TX(tx_cons); + tx_buf = &txr->tx_buf_ring[tx_cons]; + page_pool_recycle_direct(rxr->page_pool, tx_buf->page); + } } tx_cons = NEXT_TX(tx_cons); } @@ -101,6 +163,7 @@ void bnxt_tx_int_xdp(struct bnxt *bp, struct bnxt_napi *bnapi, int nr_pkts) if (rx_doorbell_needed) { tx_buf = &txr->tx_buf_ring[last_tx_cons];
bnxt_db_write(bp, &rxr->rx_db, tx_buf->rx_prod); + } } @@ -131,6 +194,20 @@ void bnxt_xdp_buff_init(struct bnxt *bp, struct bnxt_rx_ring_info *rxr, xdp_prepare_buff(xdp, *data_ptr - offset, offset, *len, false); } +void bnxt_xdp_buff_frags_free(struct bnxt_rx_ring_info *rxr, + struct xdp_buff *xdp) +{ + struct skb_shared_info *shinfo = xdp_get_shared_info_from_buff(xdp); + int i; + + for (i = 0; i < shinfo->nr_frags; i++) { + struct page *page = skb_frag_page(&shinfo->frags[i]); + + page_pool_recycle_direct(rxr->page_pool, page); + } + shinfo->nr_frags = 0; +} + /* returns the following: * true - packet consumed by XDP and new buffer is allocated. * false - packet should be passed to the stack. @@ -143,6 +220,7 @@ bool bnxt_rx_xdp(struct bnxt *bp, struct bnxt_rx_ring_info *rxr, u16 cons, struct bnxt_sw_rx_bd *rx_buf; struct pci_dev *pdev; dma_addr_t mapping; + u32 tx_needed = 1; void *orig_data; u32 tx_avail; u32 offset; @@ -178,18 +256,28 @@ bool bnxt_rx_xdp(struct bnxt *bp, struct bnxt_rx_ring_info *rxr, u16 cons, case XDP_TX: rx_buf = &rxr->rx_buf_ring[cons]; mapping = rx_buf->mapping - bp->rx_dma_offset; + *event = 0; + + if (unlikely(xdp_buff_has_frags(&xdp))) { + struct skb_shared_info *sinfo = xdp_get_shared_info_from_buff(&xdp); - if (tx_avail < 1) { + tx_needed += sinfo->nr_frags; + *event = BNXT_AGG_EVENT; + } + + if (tx_avail < tx_needed) { trace_xdp_exception(bp->dev, xdp_prog, act); + bnxt_xdp_buff_frags_free(rxr, &xdp); bnxt_reuse_rx_data(rxr, cons, page); return true; } - *event = BNXT_TX_EVENT; dma_sync_single_for_device(&pdev->dev, mapping + offset, *len, bp->rx_dir); + + *event |= BNXT_TX_EVENT; __bnxt_xmit_xdp(bp, txr, mapping + offset, *len, - NEXT_RX(rxr->rx_prod)); + NEXT_RX(rxr->rx_prod), &xdp); bnxt_reuse_rx_data(rxr, cons, page); return true; case XDP_REDIRECT: @@ -206,6 +294,7 @@ bool bnxt_rx_xdp(struct bnxt *bp, struct bnxt_rx_ring_info *rxr, u16 cons, /* if we are unable to allocate a new buffer, abort and reuse */ if (bnxt_alloc_rx_data(bp, rxr, rxr->rx_prod, GFP_ATOMIC)) { trace_xdp_exception(bp->dev, xdp_prog, act); + bnxt_xdp_buff_frags_free(rxr, &xdp); bnxt_reuse_rx_data(rxr, cons, page); return true; } @@ -225,6 +314,7 @@ bool bnxt_rx_xdp(struct bnxt *bp, struct bnxt_rx_ring_info *rxr, u16 cons, trace_xdp_exception(bp->dev, xdp_prog, act); fallthrough; case XDP_DROP: + bnxt_xdp_buff_frags_free(rxr, &xdp); bnxt_reuse_rx_data(rxr, cons, page); break; } diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.h b/drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.h index 45134d299931..8ac15184bcc8 100644 --- a/drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.h +++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.h @@ -12,7 +12,8 @@ struct bnxt_sw_tx_bd *bnxt_xmit_bd(struct bnxt *bp, struct bnxt_tx_ring_info *txr, - dma_addr_t mapping, u32 len); + dma_addr_t mapping, u32 len, + struct xdp_buff *xdp); void bnxt_tx_int_xdp(struct bnxt *bp, struct bnxt_napi *bnapi, int nr_pkts); bool bnxt_rx_xdp(struct bnxt *bp, struct bnxt_rx_ring_info *rxr, u16 cons, struct xdp_buff xdp, struct page *page, unsigned int *len, @@ -26,6 +27,8 @@ bool bnxt_xdp_attached(struct bnxt *bp, struct bnxt_rx_ring_info *rxr); void bnxt_xdp_buff_init(struct bnxt *bp, struct bnxt_rx_ring_info *rxr, u16 cons, u8 **data_ptr, unsigned int *len, struct xdp_buff *xdp); +void bnxt_xdp_buff_frags_free(struct bnxt_rx_ring_info *rxr, + struct xdp_buff *xdp); struct sk_buff *bnxt_xdp_build_skb(struct bnxt *bp, struct sk_buff *skb, struct page_pool *pool, struct xdp_buff *xdp, struct rx_cmp_ext *rxcmp1); From 
patchwork Sun Mar 20 19:58:04 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Michael Chan X-Patchwork-Id: 12786685 X-Patchwork-Delegate: kuba@kernel.org From: Michael Chan To: davem@davemloft.net Cc: netdev@vger.kernel.org, kuba@kernel.org, gospo@broadcom.com Subject: [PATCH net-next v2 11/11] bnxt: XDP multibuffer enablement Date: Sun, 20 Mar 2022 15:58:04 -0400 Message-Id: <1647806284-8529-12-git-send-email-michael.chan@broadcom.com> X-Mailer: git-send-email 1.8.3.1 In-Reply-To: <1647806284-8529-1-git-send-email-michael.chan@broadcom.com> References: <1647806284-8529-1-git-send-email-michael.chan@broadcom.com> Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org X-Patchwork-Delegate: kuba@kernel.org From: Andy Gospodarek Allow aggregation buffers to be in place in the receive path and allow XDP programs to be attached when using an MTU larger than 4K.
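Concretely, the two guards that used to rule out multibuffer XDP are dropped below: the BNXT_MAX_PAGE_MODE_MTU check in bnxt_xdp_set() and the "!agg_bufs" test before running the program in bnxt_rx_pkt(). A condensed sketch of the resulting RX dispatch; the helper name rx_try_xdp() is illustrative, and in the driver the xdp_buff actually lives in bnxt_rx_pkt() because it is reused afterwards by bnxt_xdp_build_skb() and bnxt_xdp_buff_frags_free():

/* Condensed illustration, not driver code; assumes the bnxt driver's
 * build context for its local headers.
 */
#include "bnxt.h"
#include "bnxt_xdp.h"

static bool rx_try_xdp(struct bnxt *bp, struct bnxt_rx_ring_info *rxr,
		       u16 cons, struct page *page, u8 **data_ptr,
		       unsigned int *len, u8 *event)
{
	struct xdp_buff xdp;

	if (!bnxt_xdp_attached(bp, rxr))
		return false;	/* no program attached: build an skb as usual */

	bnxt_xdp_buff_init(bp, rxr, cons, data_ptr, len, &xdp);

	/* The old "!agg_bufs" check is gone: the program also sees packets
	 * whose tail landed in aggregation buffers attached as frags.
	 * Returns true when XDP consumed the packet (TX, REDIRECT or DROP).
	 */
	return bnxt_rx_xdp(bp, rxr, cons, xdp, page, len, event);
}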
Signed-off-by: Andy Gospodarek Signed-off-by: Michael Chan --- drivers/net/ethernet/broadcom/bnxt/bnxt.c | 3 +-- drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.c | 5 ----- 2 files changed, 1 insertion(+), 7 deletions(-) diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c index 84c89ee7dc2f..4f7213af1955 100644 --- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c +++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c @@ -1937,8 +1937,7 @@ static int bnxt_rx_pkt(struct bnxt *bp, struct bnxt_cp_ring_info *cpr, xdp_active = true; } - /* skip running XDP prog if there are aggregation bufs */ - if (!agg_bufs && xdp_active) { + if (xdp_active) { if (bnxt_rx_xdp(bp, rxr, cons, xdp, data, &len, event)) { rc = 1; goto next_rx; diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.c b/drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.c index adbd92971209..3780b491a1d4 100644 --- a/drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.c +++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.c @@ -374,11 +374,6 @@ static int bnxt_xdp_set(struct bnxt *bp, struct bpf_prog *prog) int tx_xdp = 0, rc, tc; struct bpf_prog *old; - if (prog && bp->dev->mtu > BNXT_MAX_PAGE_MODE_MTU) { - netdev_warn(dev, "MTU %d larger than largest XDP supported MTU %d.\n", - bp->dev->mtu, BNXT_MAX_PAGE_MODE_MTU); - return -EOPNOTSUPP; - } if (!(bp->flags & BNXT_FLAG_SHARED_RINGS)) { netdev_warn(dev, "ethtool rx/tx channels must be combined to support XDP.\n"); return -EOPNOTSUPP;