From patchwork Wed Jul 10 20:30:27 2024
X-Patchwork-Submitter: Tony Nguyen
X-Patchwork-Id: 13729672
X-Patchwork-Delegate: kuba@kernel.org
From: Tony Nguyen
To: davem@davemloft.net, kuba@kernel.org, pabeni@redhat.com,
	edumazet@google.com, netdev@vger.kernel.org
Cc: Alexander Lobakin, anthony.l.nguyen@intel.com,
	nex.sw.ncis.osdt.itp.upstreaming@intel.com, lihong.yang@intel.com,
	willemb@google.com, almasrymina@google.com, Przemek Kitszel,
	Jacob Keller
Subject: [PATCH net-next 11/14] idpf: remove legacy Page Pool Ethtool stats
Date: Wed, 10 Jul 2024 13:30:27 -0700
Message-ID: <20240710203031.188081-12-anthony.l.nguyen@intel.com>
In-Reply-To: <20240710203031.188081-1-anthony.l.nguyen@intel.com>
References: <20240710203031.188081-1-anthony.l.nguyen@intel.com>

From: Alexander Lobakin

Page Pool Ethtool stats have been deprecated since the introduction of
the Netlink Page Pool interface. idpf is receiving big changes to its
Rx buffer management, including the &page_pool layout, so keeping these
deprecated stats only does harm, not to mention that CONFIG_IDPF
selects CONFIG_PAGE_POOL_STATS unconditionally, while the latter is
often turned off for better performance.

Remove all references to PP stats from the Ethtool code. The stats are
still available in full via the generic Netlink interface.

Reviewed-by: Przemek Kitszel
Reviewed-by: Jacob Keller
Signed-off-by: Alexander Lobakin
Signed-off-by: Tony Nguyen
---
 drivers/net/ethernet/intel/idpf/Kconfig         |  1 -
 .../net/ethernet/intel/idpf/idpf_ethtool.c      | 29 +------------------
 2 files changed, 1 insertion(+), 29 deletions(-)
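For context, the deprecated flow this patch removes is the usual
three-callback pattern built on page_pool_ethtool_stats_get_count(),
page_pool_ethtool_stats_get_strings() and page_pool_ethtool_stats_get().
A minimal sketch of that pattern for a hypothetical driver is below,
assuming CONFIG_PAGE_POOL_STATS=y; the drv_* names, struct layout and
DRV_STATS_LEN are made up for illustration and are not the idpf code
(see the real hunks that follow):

#include <linux/ethtool.h>
#include <linux/netdevice.h>
#include <net/page_pool/helpers.h>

#define DRV_STATS_LEN	8	/* hypothetical driver-private counters */

struct drv_rxq {
	struct page_pool *pp;
};

struct drv_priv {
	struct drv_rxq *rxq;
	unsigned int num_rxq;
};

static int drv_get_sset_count(struct net_device *netdev, int sset)
{
	if (sset != ETH_SS_STATS)
		return -EINVAL;

	/* Deprecated: reserve extra u64 slots for the PP counters */
	return DRV_STATS_LEN + page_pool_ethtool_stats_get_count();
}

static void drv_get_strings(struct net_device *netdev, u32 sset, u8 *data)
{
	/* ... emit the DRV_STATS_LEN driver-specific names first ... */

	/* Deprecated: append the fixed set of PP stat names */
	page_pool_ethtool_stats_get_strings(data);
}

static void drv_get_ethtool_stats(struct net_device *netdev,
				  struct ethtool_stats *stats, u64 *data)
{
	struct drv_priv *priv = netdev_priv(netdev);
	struct page_pool_stats pp_stats = { };
	unsigned int i;

	/* ... fill the DRV_STATS_LEN driver counters into data ... */

	/* Deprecated: sum every queue's pool, then dump the totals */
	for (i = 0; i < priv->num_rxq; i++)
		page_pool_get_stats(priv->rxq[i].pp, &pp_stats);

	page_pool_ethtool_stats_get(data + DRV_STATS_LEN, &pp_stats);
}

Every driver that opted in had to carry this boilerplate and select
PAGE_POOL_STATS; the Netlink interface reports the same counters from
one place in the core instead.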
diff --git a/drivers/net/ethernet/intel/idpf/Kconfig b/drivers/net/ethernet/intel/idpf/Kconfig
index 638484c5723c..1f071143d992 100644
--- a/drivers/net/ethernet/intel/idpf/Kconfig
+++ b/drivers/net/ethernet/intel/idpf/Kconfig
@@ -7,7 +7,6 @@ config IDPF
 	select DIMLIB
 	select LIBETH
 	select PAGE_POOL
-	select PAGE_POOL_STATS
 	help
 	  This driver supports Intel(R) Infrastructure Data Path Function
 	  devices.
diff --git a/drivers/net/ethernet/intel/idpf/idpf_ethtool.c b/drivers/net/ethernet/intel/idpf/idpf_ethtool.c
index e933fed16c7e..3806ddd3ce4a 100644
--- a/drivers/net/ethernet/intel/idpf/idpf_ethtool.c
+++ b/drivers/net/ethernet/intel/idpf/idpf_ethtool.c
@@ -565,8 +565,6 @@ static void idpf_get_stat_strings(struct net_device *netdev, u8 *data)
 	for (i = 0; i < vport_config->max_q.max_rxq; i++)
 		idpf_add_qstat_strings(&data, idpf_gstrings_rx_queue_stats,
 				       "rx", i);
-
-	page_pool_ethtool_stats_get_strings(data);
 }
 
 /**
@@ -600,7 +598,6 @@ static int idpf_get_sset_count(struct net_device *netdev, int sset)
 	struct idpf_netdev_priv *np = netdev_priv(netdev);
 	struct idpf_vport_config *vport_config;
 	u16 max_txq, max_rxq;
-	unsigned int size;
 
 	if (sset != ETH_SS_STATS)
 		return -EINVAL;
@@ -619,11 +616,8 @@ static int idpf_get_sset_count(struct net_device *netdev, int sset)
 	max_txq = vport_config->max_q.max_txq;
 	max_rxq = vport_config->max_q.max_rxq;
 
-	size = IDPF_PORT_STATS_LEN + (IDPF_TX_QUEUE_STATS_LEN * max_txq) +
+	return IDPF_PORT_STATS_LEN + (IDPF_TX_QUEUE_STATS_LEN * max_txq) +
 	       (IDPF_RX_QUEUE_STATS_LEN * max_rxq);
-	size += page_pool_ethtool_stats_get_count();
-
-	return size;
 }
 
 /**
@@ -876,7 +870,6 @@ static void idpf_get_ethtool_stats(struct net_device *netdev,
 {
 	struct idpf_netdev_priv *np = netdev_priv(netdev);
 	struct idpf_vport_config *vport_config;
-	struct page_pool_stats pp_stats = { };
 	struct idpf_vport *vport;
 	unsigned int total = 0;
 	unsigned int i, j;
@@ -946,32 +939,12 @@ static void idpf_get_ethtool_stats(struct net_device *netdev,
 				idpf_add_empty_queue_stats(&data, qtype);
 			else
 				idpf_add_queue_stats(&data, rxq, qtype);
-
-			/* In splitq mode, don't get page pool stats here since
-			 * the pools are attached to the buffer queues
-			 */
-			if (is_splitq)
-				continue;
-
-			if (rxq)
-				page_pool_get_stats(rxq->pp, &pp_stats);
-		}
-	}
-
-	for (i = 0; i < vport->num_rxq_grp; i++) {
-		for (j = 0; j < vport->num_bufqs_per_qgrp; j++) {
-			struct idpf_buf_queue *rxbufq =
-				&vport->rxq_grps[i].splitq.bufq_sets[j].bufq;
-
-			page_pool_get_stats(rxbufq->pp, &pp_stats);
 		}
 	}
 
 	for (; total < vport_config->max_q.max_rxq; total++)
 		idpf_add_empty_queue_stats(&data, VIRTCHNL2_QUEUE_TYPE_RX);
 
-	page_pool_ethtool_stats_get(data, &pp_stats);
-
 	rcu_read_unlock();
 
 	idpf_vport_ctrl_unlock(netdev);
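Note: after this change the per-pool counters are reachable only via
the netdev Netlink family. They are still gated by
CONFIG_PAGE_POOL_STATS, but enabling that is now the user's choice
rather than something CONFIG_IDPF forces on. Assuming a kernel tree
with the YNL tooling and the netdev spec, a user-space dump looks
roughly like:

  $ ./tools/net/ynl/cli.py --spec Documentation/netlink/specs/netdev.yaml \
        --dump page-pool-stats-get

which reports the same alloc/recycle counters per page pool, keyed by
pool and netdev, with no Ethtool plumbing required in the driver.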