From patchwork Tue Dec 14 22:42:06 2021
From: Joe Damato <jdamato@fastly.com>
To: intel-wired-lan@lists.osuosl.org
Cc: kuba@kernel.org, davem@davemloft.net, netdev@vger.kernel.org,
    jesse.brandeburg@intel.com, anthony.l.nguyen@intel.com,
    Joe Damato <jdamato@fastly.com>
Subject: [net-queue PATCH 1/5] i40e: Remove rx page reuse double count.
Date: Tue, 14 Dec 2021 14:42:06 -0800
Message-Id: <1639521730-57226-2-git-send-email-jdamato@fastly.com>
In-Reply-To: <1639521730-57226-1-git-send-email-jdamato@fastly.com>
References: <1639521730-57226-1-git-send-email-jdamato@fastly.com>

Page reuse was being tracked from two locations:
  - i40e_reuse_rx_page (via i40e_clean_rx_irq), and
  - i40e_alloc_mapped_page

Remove the double count and only count reuse from i40e_alloc_mapped_page
when the page is about to be reused.

Signed-off-by: Joe Damato <jdamato@fastly.com>
---
 drivers/net/ethernet/intel/i40e/i40e_txrx.c | 2 --
 1 file changed, 2 deletions(-)

diff --git a/drivers/net/ethernet/intel/i40e/i40e_txrx.c b/drivers/net/ethernet/intel/i40e/i40e_txrx.c
index 10a83e5..8b3ffb7 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_txrx.c
+++ b/drivers/net/ethernet/intel/i40e/i40e_txrx.c
@@ -1382,8 +1382,6 @@ static void i40e_reuse_rx_page(struct i40e_ring *rx_ring,
 	new_buff->page_offset	= old_buff->page_offset;
 	new_buff->pagecnt_bias	= old_buff->pagecnt_bias;
 
-	rx_ring->rx_stats.page_reuse_count++;
-
 	/* clear contents of buffer_info */
 	old_buff->page = NULL;
 }
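For reference, a standalone model of the accounting this patch fixes, using
hypothetical model types rather than the driver's structs: before the change,
one physical reuse bumped the counter in the recycle path and then again in
the refill path when the recycled page was handed back out.

#include <assert.h>
#include <stdint.h>

struct model_ring {
	uint64_t page_reuse_count;
	int have_cached_page;	/* stands in for the next_to_alloc page */
};

/* Recycle path (models i40e_reuse_rx_page): stash the page for the next
 * refill. Before this patch, page_reuse_count was also incremented here. */
static void recycle_page(struct model_ring *r)
{
	r->have_cached_page = 1;
}

/* Refill path (models i40e_alloc_mapped_page): the one remaining count
 * site after this patch. */
static int refill_buffer(struct model_ring *r)
{
	if (r->have_cached_page) {
		r->page_reuse_count++;
		r->have_cached_page = 0;
		return 1;
	}
	return 0;	/* would allocate and map a fresh page */
}

int main(void)
{
	struct model_ring r = { 0, 0 };

	recycle_page(&r);
	refill_buffer(&r);
	assert(r.page_reuse_count == 1);	/* was 2 before the fix */
	return 0;
}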
From patchwork Tue Dec 14 22:42:07 2021
From: Joe Damato <jdamato@fastly.com>
To: intel-wired-lan@lists.osuosl.org
Cc: kuba@kernel.org, davem@davemloft.net, netdev@vger.kernel.org,
    jesse.brandeburg@intel.com, anthony.l.nguyen@intel.com,
    Joe Damato <jdamato@fastly.com>
Subject: [net-queue PATCH 2/5] i40e: Aggregate and export RX page reuse stat.
Date: Tue, 14 Dec 2021 14:42:07 -0800
Message-Id: <1639521730-57226-3-git-send-email-jdamato@fastly.com>
In-Reply-To: <1639521730-57226-1-git-send-email-jdamato@fastly.com>
References: <1639521730-57226-1-git-send-email-jdamato@fastly.com>

RX page reuse was already being tracked by the i40e driver per RX ring.
Aggregate the counts and make them accessible via ethtool.

Signed-off-by: Joe Damato <jdamato@fastly.com>
---
 drivers/net/ethernet/intel/i40e/i40e.h         | 1 +
 drivers/net/ethernet/intel/i40e/i40e_ethtool.c | 1 +
 drivers/net/ethernet/intel/i40e/i40e_main.c    | 5 ++++-
 3 files changed, 6 insertions(+), 1 deletion(-)

diff --git a/drivers/net/ethernet/intel/i40e/i40e.h b/drivers/net/ethernet/intel/i40e/i40e.h
index 7f40f87..b61f17bf 100644
--- a/drivers/net/ethernet/intel/i40e/i40e.h
+++ b/drivers/net/ethernet/intel/i40e/i40e.h
@@ -853,6 +853,7 @@ struct i40e_vsi {
 	u64 tx_force_wb;
 	u64 rx_buf_failed;
 	u64 rx_page_failed;
+	u64 rx_page_reuse;
 
 	/* These are containers of ring pointers, allocated at run-time */
 	struct i40e_ring **rx_rings;
diff --git a/drivers/net/ethernet/intel/i40e/i40e_ethtool.c b/drivers/net/ethernet/intel/i40e/i40e_ethtool.c
index 513ba69..ceb0d5f 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_ethtool.c
+++ b/drivers/net/ethernet/intel/i40e/i40e_ethtool.c
@@ -295,6 +295,7 @@ static const struct i40e_stats i40e_gstrings_misc_stats[] = {
 	I40E_VSI_STAT("tx_busy", tx_busy),
 	I40E_VSI_STAT("rx_alloc_fail", rx_buf_failed),
 	I40E_VSI_STAT("rx_pg_alloc_fail", rx_page_failed),
+	I40E_VSI_STAT("rx_cache_reuse", rx_page_reuse),
 };
 
 /* These PF_STATs might look like duplicates of some NETDEV_STATs,
diff --git a/drivers/net/ethernet/intel/i40e/i40e_main.c b/drivers/net/ethernet/intel/i40e/i40e_main.c
index 4ff1c9b..6d3b0bc 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_main.c
+++ b/drivers/net/ethernet/intel/i40e/i40e_main.c
@@ -812,7 +812,7 @@ static void i40e_update_vsi_stats(struct i40e_vsi *vsi)
 	struct i40e_eth_stats *es;     /* device's eth stats */
 	u64 tx_restart, tx_busy;
 	struct i40e_ring *p;
-	u64 rx_page, rx_buf;
+	u64 rx_page, rx_buf, rx_reuse;
 	u64 bytes, packets;
 	unsigned int start;
 	u64 tx_linearize;
@@ -838,6 +838,7 @@ static void i40e_update_vsi_stats(struct i40e_vsi *vsi)
 	tx_restart = tx_busy = tx_linearize = tx_force_wb = 0;
 	rx_page = 0;
 	rx_buf = 0;
+	rx_reuse = 0;
 	rcu_read_lock();
 	for (q = 0; q < vsi->num_queue_pairs; q++) {
 		/* locate Tx ring */
@@ -871,6 +872,7 @@ static void i40e_update_vsi_stats(struct i40e_vsi *vsi)
 		rx_p += packets;
 		rx_buf += p->rx_stats.alloc_buff_failed;
 		rx_page += p->rx_stats.alloc_page_failed;
+		rx_reuse += p->rx_stats.page_reuse_count;
 
 		if (i40e_enabled_xdp_vsi(vsi)) {
 			/* locate XDP ring */
@@ -898,6 +900,7 @@ static void i40e_update_vsi_stats(struct i40e_vsi *vsi)
 	vsi->tx_force_wb = tx_force_wb;
 	vsi->rx_page_failed = rx_page;
 	vsi->rx_buf_failed = rx_buf;
+	vsi->rx_page_reuse = rx_reuse;
 
 	ns->rx_packets = rx_p;
 	ns->rx_bytes = rx_b;
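A minimal standalone sketch of the aggregation pattern this patch applies,
assuming hypothetical model types in place of struct i40e_ring: per-ring
counters are summed into a single VSI-wide value, which is what the new
"rx_cache_reuse" ethtool stat then reports.

#include <stdint.h>
#include <stdio.h>

struct model_rx_ring {
	uint64_t page_reuse_count;	/* per-ring counter, as in i40e */
};

/* Sum the per-ring counters into one VSI-wide total, mirroring the loop
 * in i40e_update_vsi_stats. */
static uint64_t aggregate_page_reuse(const struct model_rx_ring *rings,
				     int num_rings)
{
	uint64_t total = 0;
	int q;

	for (q = 0; q < num_rings; q++)
		total += rings[q].page_reuse_count;
	return total;
}

int main(void)
{
	struct model_rx_ring rings[3] = { { 10 }, { 4 }, { 7 } };

	printf("rx_cache_reuse: %llu\n",
	       (unsigned long long)aggregate_page_reuse(rings, 3));
	return 0;
}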
From patchwork Tue Dec 14 22:42:08 2021
From: Joe Damato <jdamato@fastly.com>
To: intel-wired-lan@lists.osuosl.org
Cc: kuba@kernel.org, davem@davemloft.net, netdev@vger.kernel.org,
    jesse.brandeburg@intel.com, anthony.l.nguyen@intel.com,
    Joe Damato <jdamato@fastly.com>
Subject: [net-queue PATCH 3/5] i40e: Add a stat tracking new RX page allocations.
Date: Tue, 14 Dec 2021 14:42:08 -0800
Message-Id: <1639521730-57226-4-git-send-email-jdamato@fastly.com>
In-Reply-To: <1639521730-57226-1-git-send-email-jdamato@fastly.com>
References: <1639521730-57226-1-git-send-email-jdamato@fastly.com>

Add a counter for new page allocations in the i40e RX path. This stat
is accessible with ethtool.

Signed-off-by: Joe Damato <jdamato@fastly.com>
---
 drivers/net/ethernet/intel/i40e/i40e.h         | 1 +
 drivers/net/ethernet/intel/i40e/i40e_ethtool.c | 1 +
 drivers/net/ethernet/intel/i40e/i40e_main.c    | 5 ++++-
 drivers/net/ethernet/intel/i40e/i40e_txrx.c    | 2 ++
 drivers/net/ethernet/intel/i40e/i40e_txrx.h    | 1 +
 5 files changed, 9 insertions(+), 1 deletion(-)

diff --git a/drivers/net/ethernet/intel/i40e/i40e.h b/drivers/net/ethernet/intel/i40e/i40e.h
index b61f17bf..ab73de2 100644
--- a/drivers/net/ethernet/intel/i40e/i40e.h
+++ b/drivers/net/ethernet/intel/i40e/i40e.h
@@ -854,6 +854,7 @@ struct i40e_vsi {
 	u64 rx_buf_failed;
 	u64 rx_page_failed;
 	u64 rx_page_reuse;
+	u64 rx_page_alloc;
 
 	/* These are containers of ring pointers, allocated at run-time */
 	struct i40e_ring **rx_rings;
diff --git a/drivers/net/ethernet/intel/i40e/i40e_ethtool.c b/drivers/net/ethernet/intel/i40e/i40e_ethtool.c
index ceb0d5f..22f746b 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_ethtool.c
+++ b/drivers/net/ethernet/intel/i40e/i40e_ethtool.c
@@ -296,6 +296,7 @@ static const struct i40e_stats i40e_gstrings_misc_stats[] = {
 	I40E_VSI_STAT("rx_alloc_fail", rx_buf_failed),
 	I40E_VSI_STAT("rx_pg_alloc_fail", rx_page_failed),
 	I40E_VSI_STAT("rx_cache_reuse", rx_page_reuse),
+	I40E_VSI_STAT("rx_cache_alloc", rx_page_alloc),
 };
 
 /* These PF_STATs might look like duplicates of some NETDEV_STATs,
diff --git a/drivers/net/ethernet/intel/i40e/i40e_main.c b/drivers/net/ethernet/intel/i40e/i40e_main.c
index 6d3b0bc..33c3f04 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_main.c
+++ b/drivers/net/ethernet/intel/i40e/i40e_main.c
@@ -812,7 +812,7 @@ static void i40e_update_vsi_stats(struct i40e_vsi *vsi)
 	struct i40e_eth_stats *es;     /* device's eth stats */
 	u64 tx_restart, tx_busy;
 	struct i40e_ring *p;
-	u64 rx_page, rx_buf, rx_reuse;
+	u64 rx_page, rx_buf, rx_reuse, rx_alloc;
 	u64 bytes, packets;
 	unsigned int start;
 	u64 tx_linearize;
@@ -839,6 +839,7 @@ static void i40e_update_vsi_stats(struct i40e_vsi *vsi)
 	rx_page = 0;
 	rx_buf = 0;
 	rx_reuse = 0;
+	rx_alloc = 0;
 	rcu_read_lock();
 	for (q = 0; q < vsi->num_queue_pairs; q++) {
 		/* locate Tx ring */
@@ -873,6 +874,7 @@ static void i40e_update_vsi_stats(struct i40e_vsi *vsi)
 		rx_buf += p->rx_stats.alloc_buff_failed;
 		rx_page += p->rx_stats.alloc_page_failed;
 		rx_reuse += p->rx_stats.page_reuse_count;
+		rx_alloc += p->rx_stats.page_alloc_count;
 
 		if (i40e_enabled_xdp_vsi(vsi)) {
 			/* locate XDP ring */
@@ -901,6 +903,7 @@ static void i40e_update_vsi_stats(struct i40e_vsi *vsi)
 	vsi->rx_page_failed = rx_page;
 	vsi->rx_buf_failed = rx_buf;
 	vsi->rx_page_reuse = rx_reuse;
+	vsi->rx_page_alloc = rx_alloc;
 
 	ns->rx_packets = rx_p;
 	ns->rx_bytes = rx_b;
diff --git a/drivers/net/ethernet/intel/i40e/i40e_txrx.c b/drivers/net/ethernet/intel/i40e/i40e_txrx.c
index 8b3ffb7..1450efd 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_txrx.c
+++ b/drivers/net/ethernet/intel/i40e/i40e_txrx.c
@@ -1671,6 +1671,8 @@ static bool i40e_alloc_mapped_page(struct i40e_ring *rx_ring,
 	if (unlikely(!page)) {
 		rx_ring->rx_stats.alloc_page_failed++;
 		return false;
+	} else {
+		rx_ring->rx_stats.page_alloc_count++;
 	}
 
 	/* map page for use */
diff --git a/drivers/net/ethernet/intel/i40e/i40e_txrx.h b/drivers/net/ethernet/intel/i40e/i40e_txrx.h
index bfc2845..7041e81 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_txrx.h
+++ b/drivers/net/ethernet/intel/i40e/i40e_txrx.h
@@ -299,6 +299,7 @@ struct i40e_rx_queue_stats {
 	u64 alloc_buff_failed;
 	u64 page_reuse_count;
 	u64 realloc_count;
+	u64 page_alloc_count;
 };
 
 enum i40e_ring_state_t {
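With patches 1-3 applied, every successful buffer refill should be counted
exactly once, as either a fresh allocation or a reuse. A standalone model of
that invariant (hypothetical names, not driver code):

#include <assert.h>
#include <stdint.h>

struct model_stats {
	uint64_t page_alloc_count;
	uint64_t page_reuse_count;
};

/* Each successful refill is counted once: reuse if a recycled page is
 * available, allocation otherwise. */
static void refill(struct model_stats *s, int page_cached)
{
	if (page_cached)
		s->page_reuse_count++;
	else
		s->page_alloc_count++;
}

int main(void)
{
	struct model_stats s = { 0, 0 };
	int cached[5] = { 0, 1, 1, 0, 1 };
	int i;

	for (i = 0; i < 5; i++)
		refill(&s, cached[i]);
	assert(s.page_alloc_count + s.page_reuse_count == 5);
	return 0;
}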
From patchwork Tue Dec 14 22:42:09 2021
From: Joe Damato <jdamato@fastly.com>
To: intel-wired-lan@lists.osuosl.org
Cc: kuba@kernel.org, davem@davemloft.net, netdev@vger.kernel.org,
    jesse.brandeburg@intel.com, anthony.l.nguyen@intel.com,
    Joe Damato <jdamato@fastly.com>
Subject: [net-queue PATCH 4/5] i40e: Add a stat for tracking pages waived.
Date: Tue, 14 Dec 2021 14:42:09 -0800
Message-Id: <1639521730-57226-5-git-send-email-jdamato@fastly.com>
In-Reply-To: <1639521730-57226-1-git-send-email-jdamato@fastly.com>
References: <1639521730-57226-1-git-send-email-jdamato@fastly.com>

In some cases, pages cannot be reused because they are not associated
with the local NUMA node. Knowing how often pages are waived helps users
understand the interaction between the driver's memory usage and their
system.

Pass rx_stats through to i40e_can_reuse_rx_page to allow tracking when
pages are waived.

The page waive count is accessible via ethtool.

Signed-off-by: Joe Damato <jdamato@fastly.com>
---
 drivers/net/ethernet/intel/i40e/i40e.h         |  1 +
 drivers/net/ethernet/intel/i40e/i40e_ethtool.c |  1 +
 drivers/net/ethernet/intel/i40e/i40e_main.c    |  5 ++++-
 drivers/net/ethernet/intel/i40e/i40e_txrx.c    | 13 ++++++++++---
 drivers/net/ethernet/intel/i40e/i40e_txrx.h    |  1 +
 5 files changed, 17 insertions(+), 4 deletions(-)

diff --git a/drivers/net/ethernet/intel/i40e/i40e.h b/drivers/net/ethernet/intel/i40e/i40e.h
index ab73de2..3774e7b 100644
--- a/drivers/net/ethernet/intel/i40e/i40e.h
+++ b/drivers/net/ethernet/intel/i40e/i40e.h
@@ -855,6 +855,7 @@ struct i40e_vsi {
 	u64 rx_page_failed;
 	u64 rx_page_reuse;
 	u64 rx_page_alloc;
+	u64 rx_page_waive;
 
 	/* These are containers of ring pointers, allocated at run-time */
 	struct i40e_ring **rx_rings;
diff --git a/drivers/net/ethernet/intel/i40e/i40e_ethtool.c b/drivers/net/ethernet/intel/i40e/i40e_ethtool.c
index 22f746b..224fe6d 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_ethtool.c
+++ b/drivers/net/ethernet/intel/i40e/i40e_ethtool.c
@@ -297,6 +297,7 @@ static const struct i40e_stats i40e_gstrings_misc_stats[] = {
 	I40E_VSI_STAT("rx_pg_alloc_fail", rx_page_failed),
 	I40E_VSI_STAT("rx_cache_reuse", rx_page_reuse),
 	I40E_VSI_STAT("rx_cache_alloc", rx_page_alloc),
+	I40E_VSI_STAT("rx_cache_waive", rx_page_waive),
 };
 
 /* These PF_STATs might look like duplicates of some NETDEV_STATs,
diff --git a/drivers/net/ethernet/intel/i40e/i40e_main.c b/drivers/net/ethernet/intel/i40e/i40e_main.c
index 33c3f04..ded7aa9 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_main.c
+++ b/drivers/net/ethernet/intel/i40e/i40e_main.c
@@ -812,7 +812,7 @@ static void i40e_update_vsi_stats(struct i40e_vsi *vsi)
 	struct i40e_eth_stats *es;     /* device's eth stats */
 	u64 tx_restart, tx_busy;
 	struct i40e_ring *p;
-	u64 rx_page, rx_buf, rx_reuse, rx_alloc;
+	u64 rx_page, rx_buf, rx_reuse, rx_alloc, rx_waive;
 	u64 bytes, packets;
 	unsigned int start;
 	u64 tx_linearize;
@@ -840,6 +840,7 @@ static void i40e_update_vsi_stats(struct i40e_vsi *vsi)
 	rx_buf = 0;
 	rx_reuse = 0;
 	rx_alloc = 0;
+	rx_waive = 0;
 	rcu_read_lock();
 	for (q = 0; q < vsi->num_queue_pairs; q++) {
 		/* locate Tx ring */
@@ -875,6 +876,7 @@ static void i40e_update_vsi_stats(struct i40e_vsi *vsi)
 		rx_page += p->rx_stats.alloc_page_failed;
 		rx_reuse += p->rx_stats.page_reuse_count;
 		rx_alloc += p->rx_stats.page_alloc_count;
+		rx_waive += p->rx_stats.page_waive_count;
 
 		if (i40e_enabled_xdp_vsi(vsi)) {
 			/* locate XDP ring */
@@ -904,6 +906,7 @@ static void i40e_update_vsi_stats(struct i40e_vsi *vsi)
 	vsi->rx_buf_failed = rx_buf;
 	vsi->rx_page_reuse = rx_reuse;
 	vsi->rx_page_alloc = rx_alloc;
+	vsi->rx_page_waive = rx_waive;
 
 	ns->rx_packets = rx_p;
 	ns->rx_bytes = rx_b;
diff --git a/drivers/net/ethernet/intel/i40e/i40e_txrx.c b/drivers/net/ethernet/intel/i40e/i40e_txrx.c
index 1450efd..c7ad983 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_txrx.c
+++ b/drivers/net/ethernet/intel/i40e/i40e_txrx.c
@@ -1982,22 +1982,29 @@ static bool i40e_cleanup_headers(struct i40e_ring *rx_ring, struct sk_buff *skb,
 
 /**
  * i40e_can_reuse_rx_page - Determine if page can be reused for another Rx
  * @rx_buffer: buffer containing the page
+ * @rx_stats: rx stats structure for the rx ring
  * @rx_buffer_pgcnt: buffer page refcount pre xdp_do_redirect() call
  *
  * If page is reusable, we have a green light for calling i40e_reuse_rx_page,
  * which will assign the current buffer to the buffer that next_to_alloc is
  * pointing to; otherwise, the DMA mapping needs to be destroyed and
- * page freed
+ * page freed.
+ *
+ * rx_stats will be updated to indicate if the page was waived because it was
+ * not reusable.
  */
 static bool i40e_can_reuse_rx_page(struct i40e_rx_buffer *rx_buffer,
+				   struct i40e_rx_queue_stats *rx_stats,
 				   int rx_buffer_pgcnt)
 {
 	unsigned int pagecnt_bias = rx_buffer->pagecnt_bias;
 	struct page *page = rx_buffer->page;
 
 	/* Is any reuse possible? */
-	if (!dev_page_is_reusable(page))
+	if (!dev_page_is_reusable(page)) {
+		rx_stats->page_waive_count++;
 		return false;
+	}
 
 #if (PAGE_SIZE < 8192)
 	/* if we are only owner of page we can reuse it */
@@ -2237,7 +2244,7 @@ static void i40e_put_rx_buffer(struct i40e_ring *rx_ring,
 			       struct i40e_rx_buffer *rx_buffer,
 			       int rx_buffer_pgcnt)
 {
-	if (i40e_can_reuse_rx_page(rx_buffer, rx_buffer_pgcnt)) {
+	if (i40e_can_reuse_rx_page(rx_buffer, &rx_ring->rx_stats, rx_buffer_pgcnt)) {
 		/* hand second half of page back to the ring */
 		i40e_reuse_rx_page(rx_ring, rx_buffer);
 	} else {
diff --git a/drivers/net/ethernet/intel/i40e/i40e_txrx.h b/drivers/net/ethernet/intel/i40e/i40e_txrx.h
index 7041e81..e049cf48 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_txrx.h
+++ b/drivers/net/ethernet/intel/i40e/i40e_txrx.h
@@ -300,6 +300,7 @@ struct i40e_rx_queue_stats {
 	u64 page_reuse_count;
 	u64 realloc_count;
 	u64 page_alloc_count;
+	u64 page_waive_count;
 };
 
 enum i40e_ring_state_t {
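The locality test behind the waive count is dev_page_is_reusable(), which
rejects pages that are not on the local NUMA node or that came from the
pfmemalloc emergency reserves. A standalone model of that check and the new
counter (hypothetical types; the real helper takes a struct page):

#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

struct model_page {
	int nid;		/* NUMA node the page came from */
	bool pfmemalloc;	/* allocated from emergency reserves? */
};

static uint64_t page_waive_count;

/* Models dev_page_is_reusable(): local node and not pfmemalloc. */
static bool model_page_is_reusable(const struct model_page *page,
				   int local_nid)
{
	return page->nid == local_nid && !page->pfmemalloc;
}

static bool try_reuse(const struct model_page *page, int local_nid)
{
	if (!model_page_is_reusable(page, local_nid)) {
		page_waive_count++;	/* what rx_cache_waive reports */
		return false;
	}
	return true;
}

int main(void)
{
	struct model_page remote = { .nid = 1, .pfmemalloc = false };
	struct model_page local = { .nid = 0, .pfmemalloc = false };

	try_reuse(&remote, 0);	/* wrong node: waived */
	try_reuse(&local, 0);	/* reusable */
	assert(page_waive_count == 1);
	return 0;
}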
From patchwork Tue Dec 14 22:42:10 2021
From: Joe Damato <jdamato@fastly.com>
To: intel-wired-lan@lists.osuosl.org
Cc: kuba@kernel.org, davem@davemloft.net, netdev@vger.kernel.org,
    jesse.brandeburg@intel.com, anthony.l.nguyen@intel.com,
    Joe Damato <jdamato@fastly.com>
Subject: [net-queue PATCH 5/5] i40e: Add a stat for tracking busy rx pages.
Date: Tue, 14 Dec 2021 14:42:10 -0800
Message-Id: <1639521730-57226-6-git-send-email-jdamato@fastly.com>
In-Reply-To: <1639521730-57226-1-git-send-email-jdamato@fastly.com>
References: <1639521730-57226-1-git-send-email-jdamato@fastly.com>

In some cases, pages cannot be reused by i40e because the page is busy.
Add a counter for this event.

Busy page count is accessible via ethtool.
Signed-off-by: Joe Damato <jdamato@fastly.com>
---
 drivers/net/ethernet/intel/i40e/i40e.h         |  1 +
 drivers/net/ethernet/intel/i40e/i40e_ethtool.c |  1 +
 drivers/net/ethernet/intel/i40e/i40e_main.c    |  5 ++++-
 drivers/net/ethernet/intel/i40e/i40e_txrx.c    | 12 ++++++++----
 drivers/net/ethernet/intel/i40e/i40e_txrx.h    |  1 +
 5 files changed, 15 insertions(+), 5 deletions(-)

diff --git a/drivers/net/ethernet/intel/i40e/i40e.h b/drivers/net/ethernet/intel/i40e/i40e.h
index 3774e7b..b50530e 100644
--- a/drivers/net/ethernet/intel/i40e/i40e.h
+++ b/drivers/net/ethernet/intel/i40e/i40e.h
@@ -856,6 +856,7 @@ struct i40e_vsi {
 	u64 rx_page_reuse;
 	u64 rx_page_alloc;
 	u64 rx_page_waive;
+	u64 rx_page_busy;
 
 	/* These are containers of ring pointers, allocated at run-time */
 	struct i40e_ring **rx_rings;
diff --git a/drivers/net/ethernet/intel/i40e/i40e_ethtool.c b/drivers/net/ethernet/intel/i40e/i40e_ethtool.c
index 224fe6d..64fd869 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_ethtool.c
+++ b/drivers/net/ethernet/intel/i40e/i40e_ethtool.c
@@ -298,6 +298,7 @@ static const struct i40e_stats i40e_gstrings_misc_stats[] = {
 	I40E_VSI_STAT("rx_cache_reuse", rx_page_reuse),
 	I40E_VSI_STAT("rx_cache_alloc", rx_page_alloc),
 	I40E_VSI_STAT("rx_cache_waive", rx_page_waive),
+	I40E_VSI_STAT("rx_cache_busy", rx_page_busy),
 };
 
 /* These PF_STATs might look like duplicates of some NETDEV_STATs,
diff --git a/drivers/net/ethernet/intel/i40e/i40e_main.c b/drivers/net/ethernet/intel/i40e/i40e_main.c
index ded7aa9..1d9032c 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_main.c
+++ b/drivers/net/ethernet/intel/i40e/i40e_main.c
@@ -812,7 +812,7 @@ static void i40e_update_vsi_stats(struct i40e_vsi *vsi)
 	struct i40e_eth_stats *es;     /* device's eth stats */
 	u64 tx_restart, tx_busy;
 	struct i40e_ring *p;
-	u64 rx_page, rx_buf, rx_reuse, rx_alloc, rx_waive;
+	u64 rx_page, rx_buf, rx_reuse, rx_alloc, rx_waive, rx_busy;
 	u64 bytes, packets;
 	unsigned int start;
 	u64 tx_linearize;
@@ -841,6 +841,7 @@ static void i40e_update_vsi_stats(struct i40e_vsi *vsi)
 	rx_reuse = 0;
 	rx_alloc = 0;
 	rx_waive = 0;
+	rx_busy = 0;
 	rcu_read_lock();
 	for (q = 0; q < vsi->num_queue_pairs; q++) {
 		/* locate Tx ring */
@@ -877,6 +878,7 @@ static void i40e_update_vsi_stats(struct i40e_vsi *vsi)
 		rx_reuse += p->rx_stats.page_reuse_count;
 		rx_alloc += p->rx_stats.page_alloc_count;
 		rx_waive += p->rx_stats.page_waive_count;
+		rx_busy += p->rx_stats.page_busy_count;
 
 		if (i40e_enabled_xdp_vsi(vsi)) {
 			/* locate XDP ring */
@@ -907,6 +909,7 @@ static void i40e_update_vsi_stats(struct i40e_vsi *vsi)
 	vsi->rx_page_reuse = rx_reuse;
 	vsi->rx_page_alloc = rx_alloc;
 	vsi->rx_page_waive = rx_waive;
+	vsi->rx_page_busy = rx_busy;
 
 	ns->rx_packets = rx_p;
 	ns->rx_bytes = rx_b;
diff --git a/drivers/net/ethernet/intel/i40e/i40e_txrx.c b/drivers/net/ethernet/intel/i40e/i40e_txrx.c
index c7ad983..271697b 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_txrx.c
+++ b/drivers/net/ethernet/intel/i40e/i40e_txrx.c
@@ -1990,8 +1990,8 @@ static bool i40e_cleanup_headers(struct i40e_ring *rx_ring, struct sk_buff *skb,
  * pointing to; otherwise, the DMA mapping needs to be destroyed and
  * page freed.
  *
- * rx_stats will be updated to indicate if the page was waived because it was
- * not reusable.
+ * rx_stats will be updated to indicate whether the page was waived
+ * or busy if it could not be reused.
  */
 static bool i40e_can_reuse_rx_page(struct i40e_rx_buffer *rx_buffer,
 				   struct i40e_rx_queue_stats *rx_stats,
@@ -2008,13 +2008,17 @@ static bool i40e_can_reuse_rx_page(struct i40e_rx_buffer *rx_buffer,
 
 #if (PAGE_SIZE < 8192)
 	/* if we are only owner of page we can reuse it */
-	if (unlikely((rx_buffer_pgcnt - pagecnt_bias) > 1))
+	if (unlikely((rx_buffer_pgcnt - pagecnt_bias) > 1)) {
+		rx_stats->page_busy_count++;
 		return false;
+	}
#else
#define I40E_LAST_OFFSET \
	(SKB_WITH_OVERHEAD(PAGE_SIZE) - I40E_RXBUFFER_2048)
-	if (rx_buffer->page_offset > I40E_LAST_OFFSET)
+	if (rx_buffer->page_offset > I40E_LAST_OFFSET) {
+		rx_stats->page_busy_count++;
 		return false;
+	}
#endif
 
 	/* If we have drained the page fragment pool we need to update
diff --git a/drivers/net/ethernet/intel/i40e/i40e_txrx.h b/drivers/net/ethernet/intel/i40e/i40e_txrx.h
index e049cf48..fd22e2f 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_txrx.h
+++ b/drivers/net/ethernet/intel/i40e/i40e_txrx.h
@@ -301,6 +301,7 @@ struct i40e_rx_queue_stats {
 	u64 realloc_count;
 	u64 page_alloc_count;
 	u64 page_waive_count;
+	u64 page_busy_count;
 };
 
 enum i40e_ring_state_t {
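Taken together, the series leaves i40e_can_reuse_rx_page() with two counted
failure modes: waived (the page fails the locality test) and busy (another
reference to the page is outstanding). A standalone model of the final
decision order (hypothetical names; mirrors the PAGE_SIZE < 8192 branch):

#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

struct model_stats {
	uint64_t waive, busy, reuse;
};

/* Waive is checked first, then busy; only a local page whose refcount is
 * fully accounted for by the bias may be reused. The "reuse" counter is
 * modeled here for the assertion; in the driver it is bumped later, in
 * the refill path. */
static bool can_reuse(bool local, int pagecnt, int pagecnt_bias,
		      struct model_stats *s)
{
	if (!local) {			/* wrong node or pfmemalloc page */
		s->waive++;
		return false;
	}
	if (pagecnt - pagecnt_bias > 1) {	/* another user holds a ref */
		s->busy++;
		return false;
	}
	s->reuse++;
	return true;
}

int main(void)
{
	struct model_stats s = { 0, 0, 0 };

	can_reuse(false, 2, 1, &s);	/* waived */
	can_reuse(true, 3, 1, &s);	/* busy: extra reference held */
	can_reuse(true, 2, 1, &s);	/* reused */
	assert(s.waive == 1 && s.busy == 1 && s.reuse == 1);
	return 0;
}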