From patchwork Sun Mar 20 19:58:01 2022
X-Patchwork-Submitter: Michael Chan
X-Patchwork-Id: 12786683
X-Patchwork-Delegate: kuba@kernel.org
From: Michael Chan
To: davem@davemloft.net
Cc: netdev@vger.kernel.org, kuba@kernel.org, gospo@broadcom.com
Subject: [PATCH net-next v2 08/11] bnxt: add page_pool support for aggregation ring when using xdp
Date: Sun, 20 Mar 2022 15:58:01 -0400
Message-Id: <1647806284-8529-9-git-send-email-michael.chan@broadcom.com>
In-Reply-To: <1647806284-8529-1-git-send-email-michael.chan@broadcom.com>
References: <1647806284-8529-1-git-send-email-michael.chan@broadcom.com>

From: Andy Gospodarek

If we are using aggregation rings with XDP enabled, allocate page
buffers for the aggregation rings from the page_pool.
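For context, the allocate-from-pool-and-DMA-map pattern that the driver's
__bnxt_alloc_rx_page() helper follows looks roughly like the sketch below.
This is not the in-tree code: the helper name and parameters are
simplified, and the real function operates on the bnxt device and ring
structures. Note that on a mapping failure the page is recycled back into
the pool rather than freed.

	/*
	 * Illustrative sketch only, not part of the patch.
	 */
	#include <linux/dma-mapping.h>
	#include <net/page_pool.h>

	static struct page *pool_alloc_mapped_page(struct page_pool *pool,
						   struct device *dev,
						   enum dma_data_direction dir,
						   dma_addr_t *mapping, gfp_t gfp)
	{
		struct page *page = page_pool_alloc_pages(pool, gfp);

		if (!page)
			return NULL;

		/* The driver owns the DMA mapping; the pool owns the page. */
		*mapping = dma_map_page_attrs(dev, page, 0, PAGE_SIZE, dir,
					      DMA_ATTR_WEAK_ORDERING);
		if (dma_mapping_error(dev, *mapping)) {
			/* Return the page to the pool instead of freeing it. */
			page_pool_recycle_direct(pool, page);
			return NULL;
		}
		return page;
	}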
Signed-off-by: Andy Gospodarek
Signed-off-by: Michael Chan
---
 drivers/net/ethernet/broadcom/bnxt/bnxt.c | 77 ++++++++++++++---------
 1 file changed, 47 insertions(+), 30 deletions(-)

diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
index b635b7ce6ba3..980c176d7c88 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
@@ -739,7 +739,6 @@ static struct page *__bnxt_alloc_rx_page(struct bnxt *bp, dma_addr_t *mapping,
 		page_pool_recycle_direct(rxr->page_pool, page);
 		return NULL;
 	}
-	*mapping += bp->rx_dma_offset;
 	return page;
 }
 
@@ -781,6 +780,7 @@ int bnxt_alloc_rx_data(struct bnxt *bp, struct bnxt_rx_ring_info *rxr,
 		if (!page)
 			return -ENOMEM;
 
+		mapping += bp->rx_dma_offset;
 		rx_buf->data = page;
 		rx_buf->data_ptr = page_address(page) + bp->rx_offset;
 	} else {
@@ -841,33 +841,41 @@ static inline int bnxt_alloc_rx_page(struct bnxt *bp,
 	u16 sw_prod = rxr->rx_sw_agg_prod;
 	unsigned int offset = 0;
 
-	if (PAGE_SIZE > BNXT_RX_PAGE_SIZE) {
-		page = rxr->rx_page;
-		if (!page) {
+	if (BNXT_RX_PAGE_MODE(bp)) {
+		page = __bnxt_alloc_rx_page(bp, &mapping, rxr, gfp);
+
+		if (!page)
+			return -ENOMEM;
+
+	} else {
+		if (PAGE_SIZE > BNXT_RX_PAGE_SIZE) {
+			page = rxr->rx_page;
+			if (!page) {
+				page = alloc_page(gfp);
+				if (!page)
+					return -ENOMEM;
+				rxr->rx_page = page;
+				rxr->rx_page_offset = 0;
+			}
+			offset = rxr->rx_page_offset;
+			rxr->rx_page_offset += BNXT_RX_PAGE_SIZE;
+			if (rxr->rx_page_offset == PAGE_SIZE)
+				rxr->rx_page = NULL;
+			else
+				get_page(page);
+		} else {
 			page = alloc_page(gfp);
 			if (!page)
 				return -ENOMEM;
-			rxr->rx_page = page;
-			rxr->rx_page_offset = 0;
 		}
-		offset = rxr->rx_page_offset;
-		rxr->rx_page_offset += BNXT_RX_PAGE_SIZE;
-		if (rxr->rx_page_offset == PAGE_SIZE)
-			rxr->rx_page = NULL;
-		else
-			get_page(page);
-	} else {
-		page = alloc_page(gfp);
-		if (!page)
-			return -ENOMEM;
-	}
-	mapping = dma_map_page_attrs(&pdev->dev, page, offset,
-				     BNXT_RX_PAGE_SIZE, DMA_FROM_DEVICE,
-				     DMA_ATTR_WEAK_ORDERING);
-	if (dma_mapping_error(&pdev->dev, mapping)) {
-		__free_page(page);
-		return -EIO;
+		mapping = dma_map_page_attrs(&pdev->dev, page, offset,
+					     BNXT_RX_PAGE_SIZE, DMA_FROM_DEVICE,
+					     DMA_ATTR_WEAK_ORDERING);
+		if (dma_mapping_error(&pdev->dev, mapping)) {
+			__free_page(page);
+			return -EIO;
+		}
 	}
 
 	if (unlikely(test_bit(sw_prod, rxr->rx_agg_bmap)))
@@ -1105,7 +1113,7 @@ static u32 __bnxt_rx_agg_pages(struct bnxt *bp,
 		}
 
 		dma_unmap_page_attrs(&pdev->dev, mapping, BNXT_RX_PAGE_SIZE,
-				     DMA_FROM_DEVICE,
+				     bp->rx_dir,
 				     DMA_ATTR_WEAK_ORDERING);
 
 		total_frag_len += frag_len;
@@ -2936,14 +2944,23 @@ static void bnxt_free_one_rx_ring_skbs(struct bnxt *bp, int ring_nr)
 		if (!page)
 			continue;
 
-		dma_unmap_page_attrs(&pdev->dev, rx_agg_buf->mapping,
-				     BNXT_RX_PAGE_SIZE, DMA_FROM_DEVICE,
-				     DMA_ATTR_WEAK_ORDERING);
+		if (BNXT_RX_PAGE_MODE(bp)) {
+			dma_unmap_page_attrs(&pdev->dev, rx_agg_buf->mapping,
+					     BNXT_RX_PAGE_SIZE, bp->rx_dir,
+					     DMA_ATTR_WEAK_ORDERING);
+			rx_agg_buf->page = NULL;
+			__clear_bit(i, rxr->rx_agg_bmap);
 
-		rx_agg_buf->page = NULL;
-		__clear_bit(i, rxr->rx_agg_bmap);
+			page_pool_recycle_direct(rxr->page_pool, page);
+		} else {
+			dma_unmap_page_attrs(&pdev->dev, rx_agg_buf->mapping,
+					     BNXT_RX_PAGE_SIZE, DMA_FROM_DEVICE,
+					     DMA_ATTR_WEAK_ORDERING);
+			rx_agg_buf->page = NULL;
+			__clear_bit(i, rxr->rx_agg_bmap);
 
-		__free_page(page);
+			__free_page(page);
+		}
 	}
 
 skip_rx_agg_free:
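As a reference for reviewers, here is the teardown split from the final
hunk of bnxt_free_one_rx_ring_skbs() reduced to its essentials:
pool-backed aggregation buffers go back to their page_pool, while
conventionally allocated pages are freed outright. The helper name and
parameters below are illustrative, not part of the patch; 'page_mode',
'dir', and 'size' stand in for BNXT_RX_PAGE_MODE(bp), bp->rx_dir, and
BNXT_RX_PAGE_SIZE.

	/*
	 * Condensed sketch only, not the driver code itself.
	 */
	#include <linux/dma-mapping.h>
	#include <linux/gfp.h>
	#include <net/page_pool.h>

	static void free_one_agg_page(struct device *dev, struct page_pool *pool,
				      struct page *page, dma_addr_t mapping,
				      unsigned int size, bool page_mode,
				      enum dma_data_direction dir)
	{
		if (page_mode) {
			dma_unmap_page_attrs(dev, mapping, size, dir,
					     DMA_ATTR_WEAK_ORDERING);
			/* Pool-backed page: recycle it for reuse on the ring. */
			page_pool_recycle_direct(pool, page);
		} else {
			dma_unmap_page_attrs(dev, mapping, size, DMA_FROM_DEVICE,
					     DMA_ATTR_WEAK_ORDERING);
			/* Conventionally allocated page: drop the reference. */
			__free_page(page);
		}
	}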