From patchwork Wed May 1 00:30:51 2024
X-Patchwork-Submitter: Michael Chan
X-Patchwork-Id: 13650144
X-Patchwork-Delegate: kuba@kernel.org
From: Michael Chan
To: davem@davemloft.net
Cc: netdev@vger.kernel.org, edumazet@google.com, kuba@kernel.org,
	pabeni@redhat.com, andrew.gospodarek@broadcom.com, Edwin Peer
Subject: [PATCH net-next v2 1/6] bnxt_en: share NQ ring sw_stats memory with subrings
Date: Tue, 30 Apr 2024 17:30:51 -0700
Message-Id: <20240501003056.100607-2-michael.chan@broadcom.com>
X-Mailer: git-send-email 2.32.0
In-Reply-To: <20240501003056.100607-1-michael.chan@broadcom.com>
References: <20240501003056.100607-1-michael.chan@broadcom.com>
X-Mailing-List: netdev@vger.kernel.org

From: Edwin Peer

On P5_PLUS chips and later, the NQ rings have subrings for RX and TX
completions respectively. These subrings are passed to the poll function
instead of the base NQ, but each ring carries its own copy of the software
ring statistics. For the stats to be conveniently accessible in
__bnxt_poll_work(), the statistics memory should either be shared between
the NQ and its subrings, or the subrings need to be included in the ethtool
stats aggregation logic. This patch opts for the former, because it is more
efficient and less confusing to keep the software statistics for a ring in
a single place. Before this patch, a counter would not be displayed if the
"wrong" cpr->sw_stats was used to increment it.

Link: https://lore.kernel.org/netdev/CACKFLikEhVAJA+osD7UjQNotdGte+fth7zOy7yDdLkTyFk9Pyw@mail.gmail.com/
Signed-off-by: Edwin Peer
Signed-off-by: Michael Chan
Reviewed-by: Simon Horman
---
(Illustration only: a simplified, standalone sketch of the shared sw_stats
scheme is appended after the patch.)

 drivers/net/ethernet/broadcom/bnxt/bnxt.c     | 40 +++++++++++--------
 drivers/net/ethernet/broadcom/bnxt/bnxt.h     |  2 +-
 .../net/ethernet/broadcom/bnxt/bnxt_ethtool.c |  4 +-
 3 files changed, 27 insertions(+), 19 deletions(-)

diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
index be96bb494ae6..98bff01b89ff 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
@@ -1811,7 +1811,7 @@ static inline struct sk_buff *bnxt_tpa_end(struct bnxt *bp,
 		skb = bnxt_copy_skb(bnapi, data_ptr, len, mapping);
 		if (!skb) {
 			bnxt_abort_tpa(cpr, idx, agg_bufs);
-			cpr->bnapi->cp_ring.sw_stats.rx.rx_oom_discards += 1;
+			cpr->sw_stats->rx.rx_oom_discards += 1;
 			return NULL;
 		}
 	} else {
@@ -1821,7 +1821,7 @@ static inline struct sk_buff *bnxt_tpa_end(struct bnxt *bp,
 		new_data = __bnxt_alloc_rx_frag(bp, &new_mapping, GFP_ATOMIC);
 		if (!new_data) {
 			bnxt_abort_tpa(cpr, idx, agg_bufs);
-			cpr->bnapi->cp_ring.sw_stats.rx.rx_oom_discards += 1;
+			cpr->sw_stats->rx.rx_oom_discards += 1;
 			return NULL;
 		}
@@ -1837,7 +1837,7 @@ static inline struct sk_buff *bnxt_tpa_end(struct bnxt *bp,
 		if (!skb) {
 			skb_free_frag(data);
 			bnxt_abort_tpa(cpr, idx, agg_bufs);
-			cpr->bnapi->cp_ring.sw_stats.rx.rx_oom_discards += 1;
+			cpr->sw_stats->rx.rx_oom_discards += 1;
 			return NULL;
 		}
 		skb_reserve(skb, bp->rx_offset);
@@ -1848,7 +1848,7 @@ static inline struct sk_buff *bnxt_tpa_end(struct bnxt *bp,
 		skb = bnxt_rx_agg_pages_skb(bp, cpr, skb, idx, agg_bufs, true);
 		if (!skb) {
 			/* Page reuse already handled by bnxt_rx_pages(). */
-			cpr->bnapi->cp_ring.sw_stats.rx.rx_oom_discards += 1;
+			cpr->sw_stats->rx.rx_oom_discards += 1;
 			return NULL;
 		}
 	}
@@ -2106,7 +2106,7 @@ static int bnxt_rx_pkt(struct bnxt *bp, struct bnxt_cp_ring_info *cpr,
 		rc = -EIO;
 		if (rx_err & RX_CMPL_ERRORS_BUFFER_ERROR_MASK) {
-			bnapi->cp_ring.sw_stats.rx.rx_buf_errors++;
+			bnapi->cp_ring.sw_stats->rx.rx_buf_errors++;
 			if (!(bp->flags & BNXT_FLAG_CHIP_P5_PLUS) &&
 			    !(bp->fw_cap & BNXT_FW_CAP_RING_MONITOR)) {
 				netdev_warn_once(bp->dev, "RX buffer error %x\n",
@@ -2222,7 +2222,7 @@ static int bnxt_rx_pkt(struct bnxt *bp, struct bnxt_cp_ring_info *cpr,
 	} else {
 		if (rxcmp1->rx_cmp_cfa_code_errors_v2 & RX_CMP_L4_CS_ERR_BITS) {
 			if (dev->features & NETIF_F_RXCSUM)
-				bnapi->cp_ring.sw_stats.rx.rx_l4_csum_errors++;
+				bnapi->cp_ring.sw_stats->rx.rx_l4_csum_errors++;
 		}
 	}
@@ -2259,7 +2259,7 @@ static int bnxt_rx_pkt(struct bnxt *bp, struct bnxt_cp_ring_info *cpr,
 	return rc;
 
 oom_next_rx:
-	cpr->bnapi->cp_ring.sw_stats.rx.rx_oom_discards += 1;
+	cpr->sw_stats->rx.rx_oom_discards += 1;
 	rc = -ENOMEM;
 	goto next_rx;
 }
@@ -2308,7 +2308,7 @@ static int bnxt_force_rx_discard(struct bnxt *bp,
 	}
 	rc = bnxt_rx_pkt(bp, cpr, raw_cons, event);
 	if (rc && rc != -EBUSY)
-		cpr->bnapi->cp_ring.sw_stats.rx.rx_netpoll_discards += 1;
+		cpr->sw_stats->rx.rx_netpoll_discards += 1;
 	return rc;
 }
@@ -3951,6 +3951,7 @@ static int bnxt_alloc_cp_rings(struct bnxt *bp)
 			if (rc)
 				return rc;
 			cpr2->bnapi = bnapi;
+			cpr2->sw_stats = cpr->sw_stats;
 			cpr2->cp_idx = k;
 			if (!k && rx) {
 				bp->rx_ring[i].rx_cpr = cpr2;
@@ -4792,6 +4793,9 @@ static void bnxt_free_ring_stats(struct bnxt *bp)
 		struct bnxt_cp_ring_info *cpr = &bnapi->cp_ring;
 
 		bnxt_free_stats_mem(bp, &cpr->stats);
+
+		kfree(cpr->sw_stats);
+		cpr->sw_stats = NULL;
 	}
 }
@@ -4806,6 +4810,10 @@ static int bnxt_alloc_stats(struct bnxt *bp)
 		struct bnxt_napi *bnapi = bp->bnapi[i];
 		struct bnxt_cp_ring_info *cpr = &bnapi->cp_ring;
 
+		cpr->sw_stats = kzalloc(sizeof(*cpr->sw_stats), GFP_KERNEL);
+		if (!cpr->sw_stats)
+			return -ENOMEM;
+
 		cpr->stats.len = size;
 		rc = bnxt_alloc_stats_mem(bp, &cpr->stats, !i);
 		if (rc)
@@ -10811,9 +10819,9 @@ static void bnxt_disable_napi(struct bnxt *bp)
 		cpr = &bnapi->cp_ring;
 		if (bnapi->tx_fault)
-			cpr->sw_stats.tx.tx_resets++;
+			cpr->sw_stats->tx.tx_resets++;
 		if (bnapi->in_reset)
-			cpr->sw_stats.rx.rx_resets++;
+			cpr->sw_stats->rx.rx_resets++;
 		napi_disable(&bnapi->napi);
 		if (bnapi->rx_ring)
 			cancel_work_sync(&cpr->dim.work);
@@ -12338,8 +12346,8 @@ static void bnxt_get_ring_stats(struct bnxt *bp,
 		stats->tx_dropped += BNXT_GET_RING_STATS64(sw, tx_error_pkts);
 
 		stats->rx_dropped +=
-			cpr->sw_stats.rx.rx_netpoll_discards +
-			cpr->sw_stats.rx.rx_oom_discards;
+			cpr->sw_stats->rx.rx_netpoll_discards +
+			cpr->sw_stats->rx.rx_oom_discards;
 	}
 }
@@ -12406,7 +12414,7 @@ static void bnxt_get_one_ring_err_stats(struct bnxt *bp,
 				struct bnxt_total_ring_err_stats *stats,
 				struct bnxt_cp_ring_info *cpr)
 {
-	struct bnxt_sw_stats *sw_stats = &cpr->sw_stats;
+	struct bnxt_sw_stats *sw_stats = cpr->sw_stats;
 	u64 *hw_stats = cpr->stats.sw_stats;
 
 	stats->rx_total_l4_csum_errors += sw_stats->rx.rx_l4_csum_errors;
@@ -13249,7 +13257,7 @@ static void bnxt_rx_ring_reset(struct bnxt *bp)
 		rxr->bnapi->in_reset = false;
 		bnxt_alloc_one_rx_ring(bp, i);
 		cpr = &rxr->bnapi->cp_ring;
-		cpr->sw_stats.rx.rx_resets++;
+		cpr->sw_stats->rx.rx_resets++;
 		if (bp->flags & BNXT_FLAG_AGG_RINGS)
 			bnxt_db_write(bp, &rxr->rx_agg_db, rxr->rx_agg_prod);
 		bnxt_db_write(bp, &rxr->rx_db, rxr->rx_prod);
@@ -13461,7 +13469,7 @@ static void bnxt_chk_missed_irq(struct bnxt *bp)
 			bnxt_dbg_hwrm_ring_info_get(bp,
 				DBG_RING_INFO_GET_REQ_RING_TYPE_L2_CMPL,
 				fw_ring_id, &val[0], &val[1]);
-			cpr->sw_stats.cmn.missed_irqs++;
+			cpr->sw_stats->cmn.missed_irqs++;
 		}
 	}
 }
@@ -14769,7 +14777,7 @@ static void bnxt_get_queue_stats_rx(struct net_device *dev, int i,
 	stats->bytes += BNXT_GET_RING_STATS64(sw, rx_mcast_bytes);
 	stats->bytes += BNXT_GET_RING_STATS64(sw, rx_bcast_bytes);
 
-	stats->alloc_fail = cpr->sw_stats.rx.rx_oom_discards;
+	stats->alloc_fail = cpr->sw_stats->rx.rx_oom_discards;
 }
 
 static void bnxt_get_queue_stats_tx(struct net_device *dev, int i,
diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.h b/drivers/net/ethernet/broadcom/bnxt/bnxt.h
index ad57ef051798..631b0039d72b 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.h
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.h
@@ -1152,7 +1152,7 @@ struct bnxt_cp_ring_info {
 	struct bnxt_stats_mem	stats;
 	u32			hw_stats_ctx_id;
 
-	struct bnxt_sw_stats	sw_stats;
+	struct bnxt_sw_stats	*sw_stats;
 
 	struct bnxt_ring_struct	cp_ring_struct;
diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c b/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c
index 68444234b268..6de3cfcea90f 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c
@@ -631,13 +631,13 @@ static void bnxt_get_ethtool_stats(struct net_device *dev,
 				buf[j] = sw_stats[k];
 
 skip_tpa_ring_stats:
-		sw = (u64 *)&cpr->sw_stats.rx;
+		sw = (u64 *)&cpr->sw_stats->rx;
 		if (is_rx_ring(bp, i)) {
 			for (k = 0; k < NUM_RING_RX_SW_STATS; j++, k++)
 				buf[j] = sw[k];
 		}
 
-		sw = (u64 *)&cpr->sw_stats.cmn;
+		sw = (u64 *)&cpr->sw_stats->cmn;
 		for (k = 0; k < NUM_RING_CMN_SW_STATS; j++, k++)
 			buf[j] = sw[k];
 	}
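
Not part of the patch: the sketch referenced above illustrates the ownership
scheme the commit message describes, as plain userspace C. All names here
(ring_sw_stats, cp_ring, nq_alloc_stats, and so on) are hypothetical
stand-ins for the real bnxt_sw_stats / bnxt_cp_ring_info types, and
calloc()/free() stand in for kzalloc()/kfree(); this is a simplified
illustration of the idea, not the driver's actual code.

#include <stdio.h>
#include <stdlib.h>

/* Hypothetical stand-in for bnxt_sw_stats: one shared counter block. */
struct ring_sw_stats {
	unsigned long long rx_oom_discards;
	unsigned long long rx_resets;
};

/* Hypothetical stand-in for bnxt_cp_ring_info: an NQ ring or a subring. */
struct cp_ring {
	struct ring_sw_stats *sw_stats;	/* owned by the NQ, shared by subrings */
	struct cp_ring *subring[2];	/* RX and TX completion subrings */
};

/* Like bnxt_alloc_stats() in the patch: only the parent NQ allocates. */
static int nq_alloc_stats(struct cp_ring *nq)
{
	nq->sw_stats = calloc(1, sizeof(*nq->sw_stats));
	return nq->sw_stats ? 0 : -1;
}

/* Like bnxt_alloc_cp_rings(): a subring borrows the parent's pointer. */
static void nq_attach_subring(struct cp_ring *nq, struct cp_ring *sub, int idx)
{
	sub->sw_stats = nq->sw_stats;
	nq->subring[idx] = sub;
}

/* Like bnxt_free_ring_stats(): only the parent frees the memory. */
static void nq_free_stats(struct cp_ring *nq)
{
	free(nq->sw_stats);
	nq->sw_stats = NULL;
}

int main(void)
{
	struct cp_ring nq = { 0 }, rx_cpr = { 0 }, tx_cpr = { 0 };

	if (nq_alloc_stats(&nq))
		return 1;
	nq_attach_subring(&nq, &rx_cpr, 0);
	nq_attach_subring(&nq, &tx_cpr, 1);

	/* A drop counted on the RX subring in the poll path... */
	rx_cpr.sw_stats->rx_oom_discards++;

	/* ...is visible through the parent NQ, where the stats are read. */
	printf("rx_oom_discards = %llu\n", nq.sw_stats->rx_oom_discards);

	nq_free_stats(&nq);
	return 0;
}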