From patchwork Mon Aug 2 14:52:38 2021
X-Patchwork-Submitter: Michael Chan
X-Patchwork-Id: 12414189
X-Patchwork-Delegate: kuba@kernel.org
From: Michael Chan
To: davem@davemloft.net
Cc: netdev@vger.kernel.org, kuba@kernel.org, gospo@broadcom.com
Subject: [PATCH net-next 1/2] bnxt_en: Don't use static arrays for completion ring pages
Date: Mon, 2 Aug 2021 10:52:38 -0400
Message-Id: <1627915959-1648-2-git-send-email-michael.chan@broadcom.com>
In-Reply-To: <1627915959-1648-1-git-send-email-michael.chan@broadcom.com>
References: <1627915959-1648-1-git-send-email-michael.chan@broadcom.com>
X-Mailer: git-send-email 1.8.3.1

We currently store these page addresses and DMA addresses in static
arrays.  On systems with 4K pages, we support up to 64 pages per
completion ring, but the actual number of pages needed for each
completion ring may be much smaller.  For example, when the RX ring
size is set to the default of 511 entries, only 16 completion ring
pages are needed per ring.  In the next patch, the maximum number of
completion pages will be doubled.  So convert these arrays to be
allocated as needed instead of declaring them statically.

Reviewed-by: Pavan Chebbi
Reviewed-by: Somnath Kotur
Reviewed-by: Edwin Peer
Signed-off-by: Michael Chan
---
 drivers/net/ethernet/broadcom/bnxt/bnxt.c | 65 +++++++++++++++++++++++
 drivers/net/ethernet/broadcom/bnxt/bnxt.h |  6 +--
 2 files changed, 68 insertions(+), 3 deletions(-)

diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
index 03b821897cf3..cc758a66fac0 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
@@ -3163,6 +3163,58 @@ static int bnxt_alloc_tx_rings(struct bnxt *bp)
 	return 0;
 }
 
+static void bnxt_free_cp_arrays(struct bnxt_cp_ring_info *cpr)
+{
+	kfree(cpr->cp_desc_ring);
+	cpr->cp_desc_ring = NULL;
+	kfree(cpr->cp_desc_mapping);
+	cpr->cp_desc_mapping = NULL;
+}
+
+static int bnxt_alloc_cp_arrays(struct bnxt_cp_ring_info *cpr, int n)
+{
+	cpr->cp_desc_ring = kcalloc(n, sizeof(*cpr->cp_desc_ring), GFP_KERNEL);
+	if (!cpr->cp_desc_ring)
+		return -ENOMEM;
+	cpr->cp_desc_mapping = kcalloc(n, sizeof(*cpr->cp_desc_mapping),
+				       GFP_KERNEL);
+	if (!cpr->cp_desc_mapping)
+		return -ENOMEM;
+	return 0;
+}
+
+static void bnxt_free_all_cp_arrays(struct bnxt *bp)
+{
+	int i;
+
+	if (!bp->bnapi)
+		return;
+	for (i = 0; i < bp->cp_nr_rings; i++) {
+		struct bnxt_napi *bnapi = bp->bnapi[i];
+
+		if (!bnapi)
+			continue;
+		bnxt_free_cp_arrays(&bnapi->cp_ring);
+	}
+}
+
+static int bnxt_alloc_all_cp_arrays(struct bnxt *bp)
+{
+	int i, n = bp->cp_nr_pages;
+
+	for (i = 0; i < bp->cp_nr_rings; i++) {
+		struct bnxt_napi *bnapi = bp->bnapi[i];
+		int rc;
+
+		if (!bnapi)
+			continue;
+		rc = bnxt_alloc_cp_arrays(&bnapi->cp_ring, n);
+		if (rc)
+			return rc;
+	}
+	return 0;
+}
+
 static void bnxt_free_cp_rings(struct bnxt *bp)
 {
 	int i;
@@ -3190,6 +3242,7 @@ static void bnxt_free_cp_rings(struct bnxt *bp)
 			if (cpr2) {
 				ring = &cpr2->cp_ring_struct;
 				bnxt_free_ring(bp, &ring->ring_mem);
+				bnxt_free_cp_arrays(cpr2);
 				kfree(cpr2);
 				cpr->cp_ring_arr[j] = NULL;
 			}
@@ -3208,6 +3261,12 @@ static struct bnxt_cp_ring_info *bnxt_alloc_cp_sub_ring(struct bnxt *bp)
 	if (!cpr)
 		return NULL;
 
+	rc = bnxt_alloc_cp_arrays(cpr, bp->cp_nr_pages);
+	if (rc) {
+		bnxt_free_cp_arrays(cpr);
+		kfree(cpr);
+		return NULL;
+	}
 	ring = &cpr->cp_ring_struct;
 	rmem = &ring->ring_mem;
 	rmem->nr_pages = bp->cp_nr_pages;
@@ -3218,6 +3277,7 @@
 	rc = bnxt_alloc_ring(bp, rmem);
 	if (rc) {
 		bnxt_free_ring(bp, rmem);
+		bnxt_free_cp_arrays(cpr);
 		kfree(cpr);
 		cpr = NULL;
 	}
@@ -4253,6 +4313,7 @@ static void bnxt_free_mem(struct bnxt *bp, bool irq_re_init)
 	bnxt_free_tx_rings(bp);
 	bnxt_free_rx_rings(bp);
 	bnxt_free_cp_rings(bp);
+	bnxt_free_all_cp_arrays(bp);
 	bnxt_free_ntp_fltrs(bp, irq_re_init);
 	if (irq_re_init) {
 		bnxt_free_ring_stats(bp);
@@ -4373,6 +4434,10 @@ static int bnxt_alloc_mem(struct bnxt *bp, bool irq_re_init)
 			goto alloc_mem_err;
 	}
 
+	rc = bnxt_alloc_all_cp_arrays(bp);
+	if (rc)
+		goto alloc_mem_err;
+
 	bnxt_init_ring_struct(bp);
 
 	rc = bnxt_alloc_rx_rings(bp);
diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.h b/drivers/net/ethernet/broadcom/bnxt/bnxt.h
index e379c48c1df9..eba8d8f0ac81 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.h
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.h
@@ -972,11 +972,11 @@ struct bnxt_cp_ring_info {
 	struct dim	dim;
 
 	union {
-		struct tx_cmp	*cp_desc_ring[MAX_CP_PAGES];
-		struct nqe_cn	*nq_desc_ring[MAX_CP_PAGES];
+		struct tx_cmp	**cp_desc_ring;
+		struct nqe_cn	**nq_desc_ring;
 	};
 
-	dma_addr_t		cp_desc_mapping[MAX_CP_PAGES];
+	dma_addr_t		*cp_desc_mapping;
 
 	struct bnxt_stats_mem	stats;
 	u32			hw_stats_ctx_id;
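
For readers who want the allocation pattern in isolation, below is a
minimal userspace sketch (not part of the patch; the struct and names
are simplified stand-ins) of the conversion the patch performs:
fixed-size per-ring arrays become pointers sized for exactly the number
of pages a ring needs.  calloc()/free() stand in for the driver's
kcalloc(..., GFP_KERNEL)/kfree() so the example builds on its own.

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

struct cp_ring {              /* stands in for struct bnxt_cp_ring_info */
	void **desc_pages;    /* was: void *desc_pages[MAX_CP_PAGES]     */
	uint64_t *desc_dma;   /* was: uint64_t desc_dma[MAX_CP_PAGES]    */
};

/* Mirrors bnxt_alloc_cp_arrays(): size both arrays for exactly n pages. */
static int cp_ring_alloc_arrays(struct cp_ring *cpr, int n)
{
	cpr->desc_pages = calloc(n, sizeof(*cpr->desc_pages));
	if (!cpr->desc_pages)
		return -1;
	cpr->desc_dma = calloc(n, sizeof(*cpr->desc_dma));
	if (!cpr->desc_dma)
		return -1;	/* caller frees both, as the patch does */
	return 0;
}

/* Mirrors bnxt_free_cp_arrays(): safe to call after a partial alloc. */
static void cp_ring_free_arrays(struct cp_ring *cpr)
{
	free(cpr->desc_pages);
	cpr->desc_pages = NULL;
	free(cpr->desc_dma);
	cpr->desc_dma = NULL;
}

int main(void)
{
	struct cp_ring cpr = { 0 };

	/* e.g. 16 pages for the default 511-entry RX ring vs. 64 before */
	if (cp_ring_alloc_arrays(&cpr, 16)) {
		cp_ring_free_arrays(&cpr);
		return 1;
	}
	printf("allocated per-ring arrays for 16 pages\n");
	cp_ring_free_arrays(&cpr);
	return 0;
}

As in the patch, the free helper tolerates a partially completed
allocation, so error paths can simply free and bail out.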