From patchwork Tue Apr 30 22:44:33 2024
X-Patchwork-Submitter: Michael Chan
X-Patchwork-Id: 13650006
X-Patchwork-Delegate: kuba@kernel.org
From: Michael Chan
To: davem@davemloft.net
Cc: netdev@vger.kernel.org, edumazet@google.com, kuba@kernel.org,
    pabeni@redhat.com, andrew.gospodarek@broadcom.com, Edwin Peer
Subject: [PATCH net-next 2/7] bnxt_en: share NQ ring sw_stats memory with subrings
Date: Tue, 30 Apr 2024 15:44:33 -0700
Message-Id: <20240430224438.91494-3-michael.chan@broadcom.com>
X-Mailer: git-send-email 2.32.0
In-Reply-To: <20240430224438.91494-1-michael.chan@broadcom.com>
References: <20240430224438.91494-1-michael.chan@broadcom.com>

From: Edwin Peer

On P5_PLUS chips and later, the NQ rings have subrings for RX and TX
completions respectively. These subrings are passed to the poll function
instead of the base NQ, but each ring carries its own copy of the
software ring statistics. For the stats to be conveniently accessible in
__bnxt_poll_work(), the statistics memory should either be shared
between the NQ and its subrings, or the subrings need to be included in
the ethtool stats aggregation logic. This patch opts for the former,
because it is more efficient and less confusing to have the software
statistics for a ring live in a single place.

Before this patch, a counter would not be displayed if the "wrong"
cpr->sw_stats was used to increment it.

Link: https://lore.kernel.org/netdev/CACKFLikEhVAJA+osD7UjQNotdGte+fth7zOy7yDdLkTyFk9Pyw@mail.gmail.com/
Signed-off-by: Edwin Peer
Signed-off-by: Michael Chan
---
 drivers/net/ethernet/broadcom/bnxt/bnxt.c     | 40 +++++++++++--------
 drivers/net/ethernet/broadcom/bnxt/bnxt.h     |  2 +-
 .../net/ethernet/broadcom/bnxt/bnxt_ethtool.c |  4 +-
 3 files changed, 27 insertions(+), 19 deletions(-)

diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
index 0eb880766012..38134b995478 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
@@ -1811,7 +1811,7 @@ static inline struct sk_buff *bnxt_tpa_end(struct bnxt *bp,
 		skb = bnxt_copy_skb(bnapi, data_ptr, len, mapping);
 		if (!skb) {
 			bnxt_abort_tpa(cpr, idx, agg_bufs);
-			cpr->bnapi->cp_ring.sw_stats.rx.rx_oom_discards += 1;
+			cpr->sw_stats->rx.rx_oom_discards += 1;
 			return NULL;
 		}
 	} else {
@@ -1821,7 +1821,7 @@ static inline struct sk_buff *bnxt_tpa_end(struct bnxt *bp,
 		new_data = __bnxt_alloc_rx_frag(bp, &new_mapping, GFP_ATOMIC);
 		if (!new_data) {
 			bnxt_abort_tpa(cpr, idx, agg_bufs);
-			cpr->bnapi->cp_ring.sw_stats.rx.rx_oom_discards += 1;
+			cpr->sw_stats->rx.rx_oom_discards += 1;
 			return NULL;
 		}
 
@@ -1837,7 +1837,7 @@ static inline struct sk_buff *bnxt_tpa_end(struct bnxt *bp,
 		if (!skb) {
 			skb_free_frag(data);
 			bnxt_abort_tpa(cpr, idx, agg_bufs);
-			cpr->bnapi->cp_ring.sw_stats.rx.rx_oom_discards += 1;
+			cpr->sw_stats->rx.rx_oom_discards += 1;
 			return NULL;
 		}
 		skb_reserve(skb, bp->rx_offset);
@@ -1848,7 +1848,7 @@ static inline struct sk_buff *bnxt_tpa_end(struct bnxt *bp,
 		skb = bnxt_rx_agg_pages_skb(bp, cpr, skb, idx, agg_bufs, true);
 		if (!skb) {
 			/* Page reuse already handled by bnxt_rx_pages(). */
-			cpr->bnapi->cp_ring.sw_stats.rx.rx_oom_discards += 1;
+			cpr->sw_stats->rx.rx_oom_discards += 1;
 			return NULL;
 		}
 	}
@@ -2106,7 +2106,7 @@ static int bnxt_rx_pkt(struct bnxt *bp, struct bnxt_cp_ring_info *cpr,
 
 		rc = -EIO;
 		if (rx_err & RX_CMPL_ERRORS_BUFFER_ERROR_MASK) {
-			bnapi->cp_ring.sw_stats.rx.rx_buf_errors++;
+			bnapi->cp_ring.sw_stats->rx.rx_buf_errors++;
 			if (!(bp->flags & BNXT_FLAG_CHIP_P5_PLUS) &&
 			    !(bp->fw_cap & BNXT_FW_CAP_RING_MONITOR)) {
 				netdev_warn_once(bp->dev, "RX buffer error %x\n",
@@ -2222,7 +2222,7 @@ static int bnxt_rx_pkt(struct bnxt *bp, struct bnxt_cp_ring_info *cpr,
 	} else {
 		if (rxcmp1->rx_cmp_cfa_code_errors_v2 & RX_CMP_L4_CS_ERR_BITS) {
 			if (dev->features & NETIF_F_RXCSUM)
-				bnapi->cp_ring.sw_stats.rx.rx_l4_csum_errors++;
+				bnapi->cp_ring.sw_stats->rx.rx_l4_csum_errors++;
 		}
 	}
 
@@ -2259,7 +2259,7 @@ static int bnxt_rx_pkt(struct bnxt *bp, struct bnxt_cp_ring_info *cpr,
 	return rc;
 
 oom_next_rx:
-	cpr->bnapi->cp_ring.sw_stats.rx.rx_oom_discards += 1;
+	cpr->sw_stats->rx.rx_oom_discards += 1;
 	rc = -ENOMEM;
 	goto next_rx;
 }
@@ -2308,7 +2308,7 @@ static int bnxt_force_rx_discard(struct bnxt *bp,
 	}
 	rc = bnxt_rx_pkt(bp, cpr, raw_cons, event);
 	if (rc && rc != -EBUSY)
-		cpr->bnapi->cp_ring.sw_stats.rx.rx_netpoll_discards += 1;
+		cpr->sw_stats->rx.rx_netpoll_discards += 1;
 	return rc;
 }
 
@@ -3951,6 +3951,7 @@ static int bnxt_alloc_cp_rings(struct bnxt *bp)
 			if (rc)
 				return rc;
 			cpr2->bnapi = bnapi;
+			cpr2->sw_stats = cpr->sw_stats;
 			cpr2->cp_idx = k;
 			if (!k && rx) {
 				bp->rx_ring[i].rx_cpr = cpr2;
@@ -4792,6 +4793,9 @@ static void bnxt_free_ring_stats(struct bnxt *bp)
 		struct bnxt_cp_ring_info *cpr = &bnapi->cp_ring;
 
 		bnxt_free_stats_mem(bp, &cpr->stats);
+
+		kfree(cpr->sw_stats);
+		cpr->sw_stats = NULL;
 	}
 }
 
@@ -4806,6 +4810,10 @@ static int bnxt_alloc_stats(struct bnxt *bp)
 		struct bnxt_napi *bnapi = bp->bnapi[i];
 		struct bnxt_cp_ring_info *cpr = &bnapi->cp_ring;
 
+		cpr->sw_stats = kzalloc(sizeof(*cpr->sw_stats), GFP_KERNEL);
+		if (!cpr->sw_stats)
+			return -ENOMEM;
+
 		cpr->stats.len = size;
 		rc = bnxt_alloc_stats_mem(bp, &cpr->stats, !i);
 		if (rc)
@@ -10807,9 +10815,9 @@ static void bnxt_disable_napi(struct bnxt *bp)
 
 		cpr = &bnapi->cp_ring;
 		if (bnapi->tx_fault)
-			cpr->sw_stats.tx.tx_resets++;
+			cpr->sw_stats->tx.tx_resets++;
 		if (bnapi->in_reset)
-			cpr->sw_stats.rx.rx_resets++;
+			cpr->sw_stats->rx.rx_resets++;
 		napi_disable(&bnapi->napi);
 		if (bnapi->rx_ring)
 			cancel_work_sync(&cpr->dim.work);
@@ -12334,8 +12342,8 @@ static void bnxt_get_ring_stats(struct bnxt *bp,
 		stats->tx_dropped += BNXT_GET_RING_STATS64(sw, tx_error_pkts);
 
 		stats->rx_dropped +=
-			cpr->sw_stats.rx.rx_netpoll_discards +
-			cpr->sw_stats.rx.rx_oom_discards;
+			cpr->sw_stats->rx.rx_netpoll_discards +
+			cpr->sw_stats->rx.rx_oom_discards;
 	}
 }
 
@@ -12402,7 +12410,7 @@ static void bnxt_get_one_ring_err_stats(struct bnxt *bp,
 				 struct bnxt_total_ring_err_stats *stats,
 				 struct bnxt_cp_ring_info *cpr)
 {
-	struct bnxt_sw_stats *sw_stats = &cpr->sw_stats;
+	struct bnxt_sw_stats *sw_stats = cpr->sw_stats;
 	u64 *hw_stats = cpr->stats.sw_stats;
 
 	stats->rx_total_l4_csum_errors += sw_stats->rx.rx_l4_csum_errors;
@@ -13245,7 +13253,7 @@ static void bnxt_rx_ring_reset(struct bnxt *bp)
 		rxr->bnapi->in_reset = false;
 		bnxt_alloc_one_rx_ring(bp, i);
 		cpr = &rxr->bnapi->cp_ring;
-		cpr->sw_stats.rx.rx_resets++;
+		cpr->sw_stats->rx.rx_resets++;
 		if (bp->flags & BNXT_FLAG_AGG_RINGS)
 			bnxt_db_write(bp, &rxr->rx_agg_db, rxr->rx_agg_prod);
 		bnxt_db_write(bp, &rxr->rx_db, rxr->rx_prod);
@@ -13457,7 +13465,7 @@ static void bnxt_chk_missed_irq(struct bnxt *bp)
 			bnxt_dbg_hwrm_ring_info_get(bp,
 				DBG_RING_INFO_GET_REQ_RING_TYPE_L2_CMPL,
 				fw_ring_id, &val[0], &val[1]);
-			cpr->sw_stats.cmn.missed_irqs++;
+			cpr->sw_stats->cmn.missed_irqs++;
 		}
 	}
 }
@@ -14765,7 +14773,7 @@ static void bnxt_get_queue_stats_rx(struct net_device *dev, int i,
 	stats->bytes += BNXT_GET_RING_STATS64(sw, rx_mcast_bytes);
 	stats->bytes += BNXT_GET_RING_STATS64(sw, rx_bcast_bytes);
 
-	stats->alloc_fail = cpr->sw_stats.rx.rx_oom_discards;
+	stats->alloc_fail = cpr->sw_stats->rx.rx_oom_discards;
 }
 
 static void bnxt_get_queue_stats_tx(struct net_device *dev, int i,
diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.h b/drivers/net/ethernet/broadcom/bnxt/bnxt.h
index 0c680032ab66..003c00b229f2 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.h
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.h
@@ -1152,7 +1152,7 @@ struct bnxt_cp_ring_info {
 	struct bnxt_stats_mem	stats;
 	u32			hw_stats_ctx_id;
 
-	struct bnxt_sw_stats	sw_stats;
+	struct bnxt_sw_stats	*sw_stats;
 
 	struct bnxt_ring_struct	cp_ring_struct;
 
diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c b/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c
index 68444234b268..6de3cfcea90f 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c
@@ -631,13 +631,13 @@ static void bnxt_get_ethtool_stats(struct net_device *dev,
 			buf[j] = sw_stats[k];
 
 skip_tpa_ring_stats:
-		sw = (u64 *)&cpr->sw_stats.rx;
+		sw = (u64 *)&cpr->sw_stats->rx;
 		if (is_rx_ring(bp, i)) {
 			for (k = 0; k < NUM_RING_RX_SW_STATS; j++, k++)
 				buf[j] = sw[k];
 		}
 
-		sw = (u64 *)&cpr->sw_stats.cmn;
+		sw = (u64 *)&cpr->sw_stats->cmn;
 		for (k = 0; k < NUM_RING_CMN_SW_STATS; j++, k++)
 			buf[j] = sw[k];
 	}
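
For illustration only (not part of the patch): a minimal standalone C sketch
of the sharing scheme described in the commit message. The types below are
simplified stand-ins for bnxt_sw_stats and bnxt_cp_ring_info, not the driver's
actual definitions. The point is that the NQ ring owns a single heap-allocated
stats block and its RX/TX completion subrings hold the same pointer, so a
counter incremented through any of the rings lands in one place.

#include <stdio.h>
#include <stdlib.h>

/* Simplified stand-in for struct bnxt_sw_stats. */
struct sw_stats {
        unsigned long long rx_oom_discards;
        unsigned long long rx_resets;
};

/* Simplified stand-in for struct bnxt_cp_ring_info. */
struct cp_ring {
        struct sw_stats *sw_stats;      /* shared pointer, no longer embedded */
};

int main(void)
{
        struct cp_ring nq = { 0 }, rx_cpr = { 0 }, tx_cpr = { 0 };

        /* One allocation per NQ ring, as bnxt_alloc_stats() now does with kzalloc(). */
        nq.sw_stats = calloc(1, sizeof(*nq.sw_stats));
        if (!nq.sw_stats)
                return 1;

        /* The subrings reuse the NQ's block, as bnxt_alloc_cp_rings() does with
         * cpr2->sw_stats = cpr->sw_stats.
         */
        rx_cpr.sw_stats = nq.sw_stats;
        tx_cpr.sw_stats = nq.sw_stats;

        /* A counter bumped through a subring is visible when read via the NQ ring. */
        rx_cpr.sw_stats->rx_oom_discards += 1;

        printf("rx_oom_discards = %llu\n", nq.sw_stats->rx_oom_discards);
        free(nq.sw_stats);
        return 0;
}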