From patchwork Wed Mar 2 07:55:47 2022
X-Patchwork-Submitter: Joe Damato
X-Patchwork-Id: 12765583
From: Joe Damato
To: netdev@vger.kernel.org, kuba@kernel.org, ilias.apalodimas@linaro.org,
    davem@davemloft.net, hawk@kernel.org, saeed@kernel.org,
    ttoukan.linux@gmail.com, brouer@redhat.com, leon@kernel.org,
    linux-rdma@vger.kernel.org, saeedm@nvidia.com
Cc: Joe Damato
Subject: [net-next v9 1/5] page_pool: Add allocation stats
Date: Tue, 1 Mar 2022 23:55:47 -0800
Message-Id: <1646207751-13621-2-git-send-email-jdamato@fastly.com>
In-Reply-To: <1646207751-13621-1-git-send-email-jdamato@fastly.com>
References: <1646207751-13621-1-git-send-email-jdamato@fastly.com>

Add per-pool statistics counters for the allocation path of a page pool.
These stats are incremented in softirq context, so no locking or per-cpu
variables are needed.

These stats are disabled by default and a kernel config option is provided
for users who wish to enable them.

The statistics added are:
 - fast: successful fast path allocations
 - slow: slow path order-0 allocations
 - slow_high_order: slow path high order allocations
 - empty: ptr ring is empty, so a slow path allocation was forced.
 - refill: an allocation which triggered a refill of the cache
 - waive: pages obtained from the ptr ring that cannot be added to
   the cache due to a NUMA mismatch.

Signed-off-by: Joe Damato
Acked-by: Jesper Dangaard Brouer
Reviewed-by: Ilias Apalodimas
---
 include/net/page_pool.h | 18 ++++++++++++++++++
 net/Kconfig             | 13 +++++++++++++
 net/core/page_pool.c    | 24 ++++++++++++++++++++----
 3 files changed, 51 insertions(+), 4 deletions(-)

diff --git a/include/net/page_pool.h b/include/net/page_pool.h
index 97c3c19..1f27e8a4 100644
--- a/include/net/page_pool.h
+++ b/include/net/page_pool.h
@@ -84,6 +84,19 @@ struct page_pool_params {
        void *init_arg;
 };
 
+#ifdef CONFIG_PAGE_POOL_STATS
+struct page_pool_alloc_stats {
+       u64 fast; /* fast path allocations */
+       u64 slow; /* slow-path order 0 allocations */
+       u64 slow_high_order; /* slow-path high order allocations */
+       u64 empty; /* failed refills due to empty ptr ring, forcing
+                   * slow path allocation
+                   */
+       u64 refill; /* allocations via successful refill */
+       u64 waive;  /* failed refills due to numa zone mismatch */
+};
+#endif
+
 struct page_pool {
        struct page_pool_params p;
 
@@ -96,6 +109,11 @@ struct page_pool {
        unsigned int frag_offset;
        struct page *frag_page;
        long frag_users;
+
+#ifdef CONFIG_PAGE_POOL_STATS
+       /* these stats are incremented while in softirq context */
+       struct page_pool_alloc_stats alloc_stats;
+#endif
        u32 xdp_mem_id;
 
        /*
diff --git a/net/Kconfig b/net/Kconfig
index 8a1f9d0..6b78f69 100644
--- a/net/Kconfig
+++ b/net/Kconfig
@@ -434,6 +434,19 @@ config NET_DEVLINK
 config PAGE_POOL
        bool
 
+config PAGE_POOL_STATS
+       default n
+       bool "Page pool stats"
+       depends on PAGE_POOL
+       help
+         Enable page pool statistics to track page allocation and recycling
+         in page pools. This option incurs additional CPU cost in allocation
+         and recycle paths and additional memory cost to store the statistics.
+         These statistics are only available if this option is enabled and if
+         the driver using the page pool supports exporting this data.
+
+         If unsure, say N.
+
 config FAILOVER
        tristate "Generic failover module"
        help
diff --git a/net/core/page_pool.c b/net/core/page_pool.c
index e25d359..0fa4b76 100644
--- a/net/core/page_pool.c
+++ b/net/core/page_pool.c
@@ -26,6 +26,13 @@
 
 #define BIAS_MAX       LONG_MAX
 
+#ifdef CONFIG_PAGE_POOL_STATS
+/* alloc_stat_inc is intended to be used in softirq context */
+#define alloc_stat_inc(pool, __stat)   (pool->alloc_stats.__stat++)
+#else
+#define alloc_stat_inc(pool, __stat)
+#endif
+
 static int page_pool_init(struct page_pool *pool,
                           const struct page_pool_params *params)
 {
@@ -117,8 +124,10 @@ static struct page *page_pool_refill_alloc_cache(struct page_pool *pool)
        int pref_nid; /* preferred NUMA node */
 
        /* Quicker fallback, avoid locks when ring is empty */
-       if (__ptr_ring_empty(r))
+       if (__ptr_ring_empty(r)) {
+               alloc_stat_inc(pool, empty);
                return NULL;
+       }
 
        /* Softirq guarantee CPU and thus NUMA node is stable. This,
         * assumes CPU refilling driver RX-ring will also run RX-NAPI.
@@ -145,14 +154,17 @@ static struct page *page_pool_refill_alloc_cache(struct page_pool *pool)
                         * This limit stress on page buddy alloactor.
                         */
                        page_pool_return_page(pool, page);
+                       alloc_stat_inc(pool, waive);
                        page = NULL;
                        break;
                }
        } while (pool->alloc.count < PP_ALLOC_CACHE_REFILL);
 
        /* Return last page */
-       if (likely(pool->alloc.count > 0))
+       if (likely(pool->alloc.count > 0)) {
                page = pool->alloc.cache[--pool->alloc.count];
+               alloc_stat_inc(pool, refill);
+       }
 
        return page;
 }
@@ -166,6 +178,7 @@ static struct page *__page_pool_get_cached(struct page_pool *pool)
        if (likely(pool->alloc.count)) {
                /* Fast-path */
                page = pool->alloc.cache[--pool->alloc.count];
+               alloc_stat_inc(pool, fast);
        } else {
                page = page_pool_refill_alloc_cache(pool);
        }
@@ -239,6 +252,7 @@ static struct page *__page_pool_alloc_page_order(struct page_pool *pool,
                return NULL;
        }
 
+       alloc_stat_inc(pool, slow_high_order);
        page_pool_set_pp_info(pool, page);
 
        /* Track how many pages are held 'in-flight' */
@@ -293,10 +307,12 @@ static struct page *__page_pool_alloc_pages_slow(struct page_pool *pool,
        }
 
        /* Return last page */
-       if (likely(pool->alloc.count > 0))
+       if (likely(pool->alloc.count > 0)) {
                page = pool->alloc.cache[--pool->alloc.count];
-       else
+               alloc_stat_inc(pool, slow);
+       } else {
                page = NULL;
+       }
 
        /* When page just alloc'ed is should/must have refcnt 1. */
        return page;
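
Taken together, the six allocation counters partition the allocation path:
every page handed to the driver is counted under exactly one of fast,
refill, slow, or slow_high_order, while empty and waive record refill
outcomes that do not produce a page. A minimal sketch of that relationship,
assuming CONFIG_PAGE_POOL_STATS=y (the helper name is illustrative, not
part of the patch):

    #include <net/page_pool.h>

    /* Sketch: total pages a pool has handed out. The empty and waive
     * counters describe why a refill failed or was abandoned, not
     * additional pages, so they are excluded from the sum.
     */
    static u64 pp_alloc_total(const struct page_pool_alloc_stats *s)
    {
            return s->fast + s->refill + s->slow + s->slow_high_order;
    }
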
From patchwork Wed Mar 2 07:55:48 2022
X-Patchwork-Submitter: Joe Damato
X-Patchwork-Id: 12765584
From: Joe Damato
To: netdev@vger.kernel.org, kuba@kernel.org, ilias.apalodimas@linaro.org,
    davem@davemloft.net, hawk@kernel.org, saeed@kernel.org,
    ttoukan.linux@gmail.com, brouer@redhat.com, leon@kernel.org,
    linux-rdma@vger.kernel.org, saeedm@nvidia.com
Cc: Joe Damato
Subject: [net-next v9 2/5] page_pool: Add recycle stats
Date: Tue, 1 Mar 2022 23:55:48 -0800
Message-Id: <1646207751-13621-3-git-send-email-jdamato@fastly.com>
In-Reply-To: <1646207751-13621-1-git-send-email-jdamato@fastly.com>
References: <1646207751-13621-1-git-send-email-jdamato@fastly.com>

Add per-cpu stats tracking page pool recycling events:
 - cached: recycling placed page in the page pool cache
 - cache_full: page pool cache was full
 - ring: page placed into the ptr ring
 - ring_full: page released from page pool because the ptr ring was full
 - released_refcnt: page released (and not recycled) because refcnt > 1

Signed-off-by: Joe Damato
Acked-by: Jesper Dangaard Brouer
Reviewed-by: Ilias Apalodimas
---
 include/net/page_pool.h | 16 ++++++++++++++++
 net/core/page_pool.c    | 30 ++++++++++++++++++++++++++++--
 2 files changed, 44 insertions(+), 2 deletions(-)

diff --git a/include/net/page_pool.h b/include/net/page_pool.h
index 1f27e8a4..298af95 100644
--- a/include/net/page_pool.h
+++ b/include/net/page_pool.h
@@ -95,6 +95,18 @@ struct page_pool_alloc_stats {
        u64 refill; /* allocations via successful refill */
        u64 waive;  /* failed refills due to numa zone mismatch */
 };
+
+struct page_pool_recycle_stats {
+       u64 cached;     /* recycling placed page in the cache. */
+       u64 cache_full; /* cache was full */
+       u64 ring;       /* recycling placed page back into ptr ring */
+       u64 ring_full;  /* page was released from page-pool because
+                        * PTR ring was full.
+                        */
+       u64 released_refcnt; /* page released because of elevated
+                             * refcnt
+                             */
+};
 #endif
 
 struct page_pool {
@@ -144,6 +156,10 @@ struct page_pool {
         */
        struct ptr_ring ring;
 
+#ifdef CONFIG_PAGE_POOL_STATS
+       /* recycle stats are per-cpu to avoid locking */
+       struct page_pool_recycle_stats __percpu *recycle_stats;
+#endif
        atomic_t pages_state_release_cnt;
 
        /* A page_pool is strictly tied to a single RX-queue being
diff --git a/net/core/page_pool.c b/net/core/page_pool.c
index 0fa4b76..3d273cb 100644
--- a/net/core/page_pool.c
+++ b/net/core/page_pool.c
@@ -29,8 +29,15 @@
 #ifdef CONFIG_PAGE_POOL_STATS
 /* alloc_stat_inc is intended to be used in softirq context */
 #define alloc_stat_inc(pool, __stat)   (pool->alloc_stats.__stat++)
+/* recycle_stat_inc is safe to use when preemption is possible. */
+#define recycle_stat_inc(pool, __stat)                                 \
+       do {                                                            \
+               struct page_pool_recycle_stats __percpu *s = pool->recycle_stats; \
+               this_cpu_inc(s->__stat);                                \
+       } while (0)
 #else
 #define alloc_stat_inc(pool, __stat)
+#define recycle_stat_inc(pool, __stat)
 #endif
 
 static int page_pool_init(struct page_pool *pool,
@@ -80,6 +87,12 @@ static int page_pool_init(struct page_pool *pool,
            pool->p.flags & PP_FLAG_PAGE_FRAG)
                return -EINVAL;
 
+#ifdef CONFIG_PAGE_POOL_STATS
+       pool->recycle_stats = alloc_percpu(struct page_pool_recycle_stats);
+       if (!pool->recycle_stats)
+               return -ENOMEM;
+#endif
+
        if (ptr_ring_init(&pool->ring, ring_qsize, GFP_KERNEL) < 0)
                return -ENOMEM;
 
@@ -410,7 +423,12 @@ static bool page_pool_recycle_in_ring(struct page_pool *pool, struct page *page)
        else
                ret = ptr_ring_produce_bh(&pool->ring, page);
 
-       return (ret == 0) ? true : false;
+       if (!ret) {
+               recycle_stat_inc(pool, ring);
+               return true;
+       }
+
+       return false;
 }
 
 /* Only allow direct recycling in special circumstances, into the
@@ -421,11 +439,14 @@ static bool page_pool_recycle_in_ring(struct page_pool *pool, struct page *page)
 static bool page_pool_recycle_in_cache(struct page *page,
                                        struct page_pool *pool)
 {
-       if (unlikely(pool->alloc.count == PP_ALLOC_CACHE_SIZE))
+       if (unlikely(pool->alloc.count == PP_ALLOC_CACHE_SIZE)) {
+               recycle_stat_inc(pool, cache_full);
                return false;
+       }
 
        /* Caller MUST have verified/know (page_ref_count(page) == 1) */
        pool->alloc.cache[pool->alloc.count++] = page;
+       recycle_stat_inc(pool, cached);
        return true;
 }
 
@@ -475,6 +496,7 @@ __page_pool_put_page(struct page_pool *pool, struct page *page,
         * doing refcnt based recycle tricks, meaning another process
         * will be invoking put_page.
         */
+       recycle_stat_inc(pool, released_refcnt);
        /* Do not replace this with page_pool_return_page() */
        page_pool_release_page(pool, page);
        put_page(page);
@@ -488,6 +510,7 @@ void page_pool_put_defragged_page(struct page_pool *pool, struct page *page,
        page = __page_pool_put_page(pool, page, dma_sync_size, allow_direct);
        if (page && !page_pool_recycle_in_ring(pool, page)) {
                /* Cache full, fallback to free pages */
+               recycle_stat_inc(pool, ring_full);
                page_pool_return_page(pool, page);
        }
 }
@@ -636,6 +659,9 @@ static void page_pool_free(struct page_pool *pool)
        if (pool->p.flags & PP_FLAG_DMA_MAP)
                put_device(pool->p.dev);
 
+#ifdef CONFIG_PAGE_POOL_STATS
+       free_percpu(pool->recycle_stats);
+#endif
        kfree(pool);
 }
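
Unlike the allocation counters, which are only ever touched from softirq
context, recycling can run where preemption is possible, which is why the
recycle counters are per-cpu and bumped with this_cpu_inc() instead of
taking a lock. The flip side is that reading a counter means summing every
CPU's slot; a sketch of that (the helper is illustrative -- the next patch
in the series adds the real aggregation API):

    #include <linux/cpumask.h>
    #include <linux/percpu.h>
    #include <net/page_pool.h>

    /* Sketch: sum one per-cpu recycle counter across all possible CPUs.
     * Each CPU increments only its own copy, so no lock is needed here.
     */
    static u64 pp_recycle_ring_total(struct page_pool *pool)
    {
            u64 total = 0;
            int cpu;

            for_each_possible_cpu(cpu)
                    total += per_cpu_ptr(pool->recycle_stats, cpu)->ring;

            return total;
    }
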
From patchwork Wed Mar 2 07:55:49 2022
X-Patchwork-Submitter: Joe Damato
X-Patchwork-Id: 12765585
From: Joe Damato
To: netdev@vger.kernel.org, kuba@kernel.org, ilias.apalodimas@linaro.org,
    davem@davemloft.net, hawk@kernel.org, saeed@kernel.org,
    ttoukan.linux@gmail.com, brouer@redhat.com, leon@kernel.org,
    linux-rdma@vger.kernel.org, saeedm@nvidia.com
Cc: Joe Damato
Subject: [net-next v9 3/5] page_pool: Add function to batch and return stats
Date: Tue, 1 Mar 2022 23:55:49 -0800
Message-Id: <1646207751-13621-4-git-send-email-jdamato@fastly.com>
In-Reply-To: <1646207751-13621-1-git-send-email-jdamato@fastly.com>
References: <1646207751-13621-1-git-send-email-jdamato@fastly.com>

Add a function, page_pool_get_stats, which drivers can use to obtain stats
for a specified page_pool.

Signed-off-by: Joe Damato
Acked-by: Jesper Dangaard Brouer
Reviewed-by: Ilias Apalodimas
---
 include/net/page_pool.h | 17 +++++++++++++++++
 net/core/page_pool.c    | 25 +++++++++++++++++++++++++
 2 files changed, 42 insertions(+)

diff --git a/include/net/page_pool.h b/include/net/page_pool.h
index 298af95..ea5fb70 100644
--- a/include/net/page_pool.h
+++ b/include/net/page_pool.h
@@ -107,6 +107,23 @@ struct page_pool_recycle_stats {
                              * refcnt
                              */
 };
+
+/* This struct wraps the above stats structs so users of the
+ * page_pool_get_stats API can pass a single argument when requesting the
+ * stats for the page pool.
+ */
+struct page_pool_stats {
+       struct page_pool_alloc_stats alloc_stats;
+       struct page_pool_recycle_stats recycle_stats;
+};
+
+/*
+ * Drivers that wish to harvest page pool stats and report them to users
+ * (perhaps via ethtool, debugfs, or another mechanism) can allocate a
+ * struct page_pool_stats and call page_pool_get_stats to get stats for the specified pool.
+ */
+bool page_pool_get_stats(struct page_pool *pool,
+                        struct page_pool_stats *stats);
 #endif
 
 struct page_pool {
diff --git a/net/core/page_pool.c b/net/core/page_pool.c
index 3d273cb..1943c0f 100644
--- a/net/core/page_pool.c
+++ b/net/core/page_pool.c
@@ -35,6 +35,31 @@
                struct page_pool_recycle_stats __percpu *s = pool->recycle_stats; \
                this_cpu_inc(s->__stat);                                \
        } while (0)
+
+bool page_pool_get_stats(struct page_pool *pool,
+                        struct page_pool_stats *stats)
+{
+       int cpu = 0;
+
+       if (!stats)
+               return false;
+
+       memcpy(&stats->alloc_stats, &pool->alloc_stats, sizeof(pool->alloc_stats));
+
+       for_each_possible_cpu(cpu) {
+               const struct page_pool_recycle_stats *pcpu =
+                       per_cpu_ptr(pool->recycle_stats, cpu);
+
+               stats->recycle_stats.cached += pcpu->cached;
+               stats->recycle_stats.cache_full += pcpu->cache_full;
+               stats->recycle_stats.ring += pcpu->ring;
+               stats->recycle_stats.ring_full += pcpu->ring_full;
+               stats->recycle_stats.released_refcnt += pcpu->released_refcnt;
+       }
+
+       return true;
+}
+EXPORT_SYMBOL(page_pool_get_stats);
 #else
 #define alloc_stat_inc(pool, __stat)
 #define recycle_stat_inc(pool, __stat)
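
Note that page_pool_get_stats() copies the softirq-only allocation
counters with memcpy() but accumulates the per-cpu recycle counters into
the caller's struct with +=, so the caller must pass a zeroed structure.
A sketch of a typical call site (the pr_info() reporting is illustrative):

    #include <linux/printk.h>
    #include <net/page_pool.h>

    static void report_pool_stats(struct page_pool *pool)
    {
            /* Zero-initialized: recycle counters are accumulated with +=,
             * so stale contents would skew the result.
             */
            struct page_pool_stats stats = { 0 };

            if (!page_pool_get_stats(pool, &stats))
                    return;

            pr_info("fast=%llu slow=%llu ring=%llu ring_full=%llu\n",
                    stats.alloc_stats.fast, stats.alloc_stats.slow,
                    stats.recycle_stats.ring, stats.recycle_stats.ring_full);
    }
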
From patchwork Wed Mar 2 07:55:50 2022
X-Patchwork-Submitter: Joe Damato
X-Patchwork-Id: 12765586
From: Joe Damato
To: netdev@vger.kernel.org, kuba@kernel.org, ilias.apalodimas@linaro.org,
    davem@davemloft.net, hawk@kernel.org, saeed@kernel.org,
    ttoukan.linux@gmail.com, brouer@redhat.com, leon@kernel.org,
    linux-rdma@vger.kernel.org, saeedm@nvidia.com
Cc: Joe Damato
Subject: [net-next v9 4/5] Documentation: update networking/page_pool.rst
Date: Tue, 1 Mar 2022 23:55:50 -0800
Message-Id: <1646207751-13621-5-git-send-email-jdamato@fastly.com>
In-Reply-To: <1646207751-13621-1-git-send-email-jdamato@fastly.com>
References: <1646207751-13621-1-git-send-email-jdamato@fastly.com>

Add the new stats API, kernel config parameter, and stats structure
information to the page_pool documentation.

Signed-off-by: Joe Damato
---
 Documentation/networking/page_pool.rst | 56 ++++++++++++++++++++++++++++++++++
 1 file changed, 56 insertions(+)

diff --git a/Documentation/networking/page_pool.rst b/Documentation/networking/page_pool.rst
index a147591..5db8c26 100644
--- a/Documentation/networking/page_pool.rst
+++ b/Documentation/networking/page_pool.rst
@@ -105,6 +105,47 @@ a page will cause no race conditions is enough.
 Please note the caller must not use data area after running
 page_pool_put_page_bulk(), as this function overwrites it.
 
+* page_pool_get_stats(): Retrieve statistics about the page_pool. This API
+  is only available if the kernel has been configured with
+  ``CONFIG_PAGE_POOL_STATS=y``. A pointer to a caller allocated ``struct
+  page_pool_stats`` structure is passed to this API which is filled in. The
+  caller can then report those stats to the user (perhaps via ethtool,
+  debugfs, etc.). See below for an example usage of this API.
+
+Stats API and structures
+------------------------
+If the kernel is configured with ``CONFIG_PAGE_POOL_STATS=y``, the API
+``page_pool_get_stats()`` and structures described below are available. It
+takes a pointer to a ``struct page_pool`` and a pointer to a ``struct
+page_pool_stats`` allocated by the caller.
+
+The API will fill in the provided ``struct page_pool_stats`` with
+statistics about the page_pool.
+
+The stats structure has the following fields::
+
+    struct page_pool_stats {
+        struct page_pool_alloc_stats alloc_stats;
+        struct page_pool_recycle_stats recycle_stats;
+    };
+
+
+The ``struct page_pool_alloc_stats`` has the following fields:
+  * ``fast``: successful fast path allocations
+  * ``slow``: slow path order-0 allocations
+  * ``slow_high_order``: slow path high order allocations
+  * ``empty``: ptr ring is empty, so a slow path allocation was forced.
+  * ``refill``: an allocation which triggered a refill of the cache
+  * ``waive``: pages obtained from the ptr ring that cannot be added to
+    the cache due to a NUMA mismatch.
+
+The ``struct page_pool_recycle_stats`` has the following fields:
+  * ``cached``: recycling placed page in the page pool cache
+  * ``cache_full``: page pool cache was full
+  * ``ring``: page placed into the ptr ring
+  * ``ring_full``: page released from page pool because the ptr ring was full
+  * ``released_refcnt``: page released (and not recycled) because refcnt > 1
+
 Coding examples
 ===============
 
@@ -157,6 +198,21 @@ NAPI poller
         }
     }
 
+Stats
+-----
+
+.. code-block:: c
+
+    #ifdef CONFIG_PAGE_POOL_STATS
+    /* retrieve stats */
+    struct page_pool_stats stats = { 0 };
+    if (page_pool_get_stats(page_pool, &stats)) {
+        /* perhaps the driver reports statistics with ethtool */
+        ethtool_print_allocation_stats(&stats.alloc_stats);
+        ethtool_print_recycle_stats(&stats.recycle_stats);
+    }
+    #endif
+
 Driver unload
 -------------
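
One figure the documented fields make easy to derive is a recycling ratio:
pages that re-entered the pool (cached plus ring) versus pages handed out
(fast, refill, and both slow counters). A sketch of such a helper, assuming
the stats were filled in by page_pool_get_stats() (the helper itself is
illustrative, not part of the documented API):

    #include <linux/math64.h>
    #include <net/page_pool.h>

    /* Sketch: percentage of allocated pages that were recycled back into
     * the pool, either into the per-cpu cache or into the ptr ring.
     */
    static u64 pp_recycle_percent(const struct page_pool_stats *s)
    {
            u64 handed_out = s->alloc_stats.fast + s->alloc_stats.refill +
                             s->alloc_stats.slow + s->alloc_stats.slow_high_order;
            u64 recycled = s->recycle_stats.cached + s->recycle_stats.ring;

            return handed_out ? div64_u64(recycled * 100, handed_out) : 0;
    }
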
From patchwork Wed Mar 2 07:55:51 2022
X-Patchwork-Submitter: Joe Damato
X-Patchwork-Id: 12765587
From: Joe Damato
To: netdev@vger.kernel.org, kuba@kernel.org, ilias.apalodimas@linaro.org,
    davem@davemloft.net, hawk@kernel.org, saeed@kernel.org,
    ttoukan.linux@gmail.com, brouer@redhat.com, leon@kernel.org,
    linux-rdma@vger.kernel.org, saeedm@nvidia.com
Cc: Joe Damato
Subject: [net-next v9 5/5] mlx5: add support for page_pool_get_stats
Date: Tue, 1 Mar 2022 23:55:51 -0800
Message-Id: <1646207751-13621-6-git-send-email-jdamato@fastly.com>
In-Reply-To: <1646207751-13621-1-git-send-email-jdamato@fastly.com>
References: <1646207751-13621-1-git-send-email-jdamato@fastly.com>

Add support for the page_pool_get_stats API to mlx5. If the user has
enabled CONFIG_PAGE_POOL_STATS in their kernel, ethtool will output page
pool stats.

Signed-off-by: Joe Damato
Acked-by: Saeed Mahameed
---
 drivers/net/ethernet/mellanox/mlx5/core/en_stats.c | 75 ++++++++++++++++++++++
 drivers/net/ethernet/mellanox/mlx5/core/en_stats.h | 27 +++++++-
 2 files changed, 101 insertions(+), 1 deletion(-)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_stats.c b/drivers/net/ethernet/mellanox/mlx5/core/en_stats.c
index 2afecc4..336e4d0 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_stats.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_stats.c
@@ -37,6 +37,10 @@
 #include "en/ptp.h"
 #include "en/port.h"
 
+#ifdef CONFIG_PAGE_POOL_STATS
+#include <net/page_pool.h>
+#endif
+
 static unsigned int stats_grps_num(struct mlx5e_priv *priv)
 {
        return !priv->profile->stats_grps_num ? 0 :
@@ -183,6 +187,19 @@ static const struct counter_desc sw_stats_desc[] = {
        { MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, rx_congst_umr) },
        { MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, rx_arfs_err) },
        { MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, rx_recover) },
+#ifdef CONFIG_PAGE_POOL_STATS
+       { MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, rx_pp_alloc_fast) },
+       { MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, rx_pp_alloc_slow) },
+       { MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, rx_pp_alloc_slow_high_order) },
+       { MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, rx_pp_alloc_empty) },
+       { MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, rx_pp_alloc_refill) },
+       { MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, rx_pp_alloc_waive) },
+       { MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, rx_pp_recycle_cached) },
+       { MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, rx_pp_recycle_cache_full) },
+       { MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, rx_pp_recycle_ring) },
+       { MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, rx_pp_recycle_ring_full) },
+       { MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, rx_pp_recycle_released_ref) },
+#endif
 #ifdef CONFIG_MLX5_EN_TLS
        { MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, rx_tls_decrypted_packets) },
        { MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, rx_tls_decrypted_bytes) },
@@ -349,6 +366,19 @@ static void mlx5e_stats_grp_sw_update_stats_rq_stats(struct mlx5e_sw_stats *s,
        s->rx_congst_umr  += rq_stats->congst_umr;
        s->rx_arfs_err    += rq_stats->arfs_err;
        s->rx_recover     += rq_stats->recover;
+#ifdef CONFIG_PAGE_POOL_STATS
+       s->rx_pp_alloc_fast += rq_stats->pp_alloc_fast;
+       s->rx_pp_alloc_slow += rq_stats->pp_alloc_slow;
+       s->rx_pp_alloc_empty += rq_stats->pp_alloc_empty;
+       s->rx_pp_alloc_refill += rq_stats->pp_alloc_refill;
+       s->rx_pp_alloc_waive += rq_stats->pp_alloc_waive;
+       s->rx_pp_alloc_slow_high_order += rq_stats->pp_alloc_slow_high_order;
+       s->rx_pp_recycle_cached += rq_stats->pp_recycle_cached;
+       s->rx_pp_recycle_cache_full += rq_stats->pp_recycle_cache_full;
+       s->rx_pp_recycle_ring += rq_stats->pp_recycle_ring;
+       s->rx_pp_recycle_ring_full += rq_stats->pp_recycle_ring_full;
+       s->rx_pp_recycle_released_ref += rq_stats->pp_recycle_released_ref;
+#endif
 #ifdef CONFIG_MLX5_EN_TLS
        s->rx_tls_decrypted_packets += rq_stats->tls_decrypted_packets;
        s->rx_tls_decrypted_bytes   += rq_stats->tls_decrypted_bytes;
@@ -455,6 +485,35 @@ static void mlx5e_stats_grp_sw_update_stats_qos(struct mlx5e_priv *priv,
        }
 }
 
+#ifdef CONFIG_PAGE_POOL_STATS
+static void mlx5e_stats_update_stats_rq_page_pool(struct mlx5e_channel *c)
+{
+       struct mlx5e_rq_stats *rq_stats = c->rq.stats;
+       struct page_pool *pool = c->rq.page_pool;
+       struct page_pool_stats stats = { 0 };
+
+       if (!page_pool_get_stats(pool, &stats))
+               return;
+
+       rq_stats->pp_alloc_fast = stats.alloc_stats.fast;
+       rq_stats->pp_alloc_slow = stats.alloc_stats.slow;
+       rq_stats->pp_alloc_slow_high_order = stats.alloc_stats.slow_high_order;
+       rq_stats->pp_alloc_empty = stats.alloc_stats.empty;
+       rq_stats->pp_alloc_waive = stats.alloc_stats.waive;
+       rq_stats->pp_alloc_refill = stats.alloc_stats.refill;
+
+       rq_stats->pp_recycle_cached = stats.recycle_stats.cached;
+       rq_stats->pp_recycle_cache_full = stats.recycle_stats.cache_full;
+       rq_stats->pp_recycle_ring = stats.recycle_stats.ring;
+       rq_stats->pp_recycle_ring_full = stats.recycle_stats.ring_full;
+       rq_stats->pp_recycle_released_ref = stats.recycle_stats.released_refcnt;
+}
+#else
+static void mlx5e_stats_update_stats_rq_page_pool(struct mlx5e_channel *c)
+{
+}
+#endif
+
 static MLX5E_DECLARE_STATS_GRP_OP_UPDATE_STATS(sw)
 {
        struct mlx5e_sw_stats *s = &priv->stats.sw;
@@ -465,8 +524,11 @@ static MLX5E_DECLARE_STATS_GRP_OP_UPDATE_STATS(sw)
        for (i = 0; i < priv->stats_nch; i++) {
                struct mlx5e_channel_stats *channel_stats =
                        priv->channel_stats[i];
+
                int j;
 
+               mlx5e_stats_update_stats_rq_page_pool(priv->channels.c[i]);
+
                mlx5e_stats_grp_sw_update_stats_rq_stats(s, &channel_stats->rq);
                mlx5e_stats_grp_sw_update_stats_xdpsq(s, &channel_stats->rq_xdpsq);
                mlx5e_stats_grp_sw_update_stats_ch_stats(s, &channel_stats->ch);
@@ -1887,6 +1949,19 @@ static const struct counter_desc rq_stats_desc[] = {
        { MLX5E_DECLARE_RX_STAT(struct mlx5e_rq_stats, congst_umr) },
        { MLX5E_DECLARE_RX_STAT(struct mlx5e_rq_stats, arfs_err) },
        { MLX5E_DECLARE_RX_STAT(struct mlx5e_rq_stats, recover) },
+#ifdef CONFIG_PAGE_POOL_STATS
+       { MLX5E_DECLARE_RX_STAT(struct mlx5e_rq_stats, pp_alloc_fast) },
+       { MLX5E_DECLARE_RX_STAT(struct mlx5e_rq_stats, pp_alloc_slow) },
+       { MLX5E_DECLARE_RX_STAT(struct mlx5e_rq_stats, pp_alloc_slow_high_order) },
+       { MLX5E_DECLARE_RX_STAT(struct mlx5e_rq_stats, pp_alloc_empty) },
+       { MLX5E_DECLARE_RX_STAT(struct mlx5e_rq_stats, pp_alloc_refill) },
+       { MLX5E_DECLARE_RX_STAT(struct mlx5e_rq_stats, pp_alloc_waive) },
+       { MLX5E_DECLARE_RX_STAT(struct mlx5e_rq_stats, pp_recycle_cached) },
+       { MLX5E_DECLARE_RX_STAT(struct mlx5e_rq_stats, pp_recycle_cache_full) },
+       { MLX5E_DECLARE_RX_STAT(struct mlx5e_rq_stats, pp_recycle_ring) },
+       { MLX5E_DECLARE_RX_STAT(struct mlx5e_rq_stats, pp_recycle_ring_full) },
+       { MLX5E_DECLARE_RX_STAT(struct mlx5e_rq_stats, pp_recycle_released_ref) },
+#endif
 #ifdef CONFIG_MLX5_EN_TLS
        { MLX5E_DECLARE_RX_STAT(struct mlx5e_rq_stats, tls_decrypted_packets) },
        { MLX5E_DECLARE_RX_STAT(struct mlx5e_rq_stats, tls_decrypted_bytes) },
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_stats.h b/drivers/net/ethernet/mellanox/mlx5/core/en_stats.h
index 14eaf92..a7a025d 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_stats.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_stats.h
@@ -205,7 +205,19 @@ struct mlx5e_sw_stats {
        u64 ch_aff_change;
        u64 ch_force_irq;
        u64 ch_eq_rearm;
-
+#ifdef CONFIG_PAGE_POOL_STATS
+       u64 rx_pp_alloc_fast;
+       u64 rx_pp_alloc_slow;
+       u64 rx_pp_alloc_slow_high_order;
+       u64 rx_pp_alloc_empty;
+       u64 rx_pp_alloc_refill;
+       u64 rx_pp_alloc_waive;
+       u64 rx_pp_recycle_cached;
+       u64 rx_pp_recycle_cache_full;
+       u64 rx_pp_recycle_ring;
+       u64 rx_pp_recycle_ring_full;
+       u64 rx_pp_recycle_released_ref;
+#endif
 #ifdef CONFIG_MLX5_EN_TLS
        u64 tx_tls_encrypted_packets;
        u64 tx_tls_encrypted_bytes;
@@ -352,6 +364,19 @@ struct mlx5e_rq_stats {
        u64 congst_umr;
        u64 arfs_err;
        u64 recover;
+#ifdef CONFIG_PAGE_POOL_STATS
+       u64 pp_alloc_fast;
+       u64 pp_alloc_slow;
+       u64 pp_alloc_slow_high_order;
+       u64 pp_alloc_empty;
+       u64 pp_alloc_refill;
+       u64 pp_alloc_waive;
+       u64 pp_recycle_cached;
+       u64 pp_recycle_cache_full;
+       u64 pp_recycle_ring;
+       u64 pp_recycle_ring_full;
+       u64 pp_recycle_released_ref;
+#endif
 #ifdef CONFIG_MLX5_EN_TLS
        u64 tls_decrypted_packets;
        u64 tls_decrypted_bytes;
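
The mlx5 wiring above shows the pattern other drivers can follow:
page_pool_get_stats() reports running totals, so the per-ring counters are
written with plain assignment rather than +=, and the existing sw-stats
pass then sums the rings just as it does for every other counter. A
condensed sketch of that shape for a hypothetical driver (my_rq,
my_rq_stats, and the reduced field set are illustrative, not a real API):

    #include <net/page_pool.h>

    struct my_rq_stats {
            u64 pp_alloc_fast;
            u64 pp_recycle_ring;
            /* ... one field per page_pool counter ... */
    };

    struct my_rq {
            struct page_pool *page_pool;
            struct my_rq_stats *stats;
    };

    static void my_rq_update_pp_stats(struct my_rq *rq)
    {
            struct page_pool_stats stats = { 0 };

            if (!page_pool_get_stats(rq->page_pool, &stats))
                    return;

            /* Assign, don't accumulate: the pool reports cumulative totals. */
            rq->stats->pp_alloc_fast = stats.alloc_stats.fast;
            rq->stats->pp_recycle_ring = stats.recycle_stats.ring;
    }
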