From patchwork Fri Feb 25 17:41:51 2022
X-Patchwork-Submitter: Joe Damato
X-Patchwork-Id: 12760677
X-Patchwork-Delegate: kuba@kernel.org
From: Joe Damato
To: netdev@vger.kernel.org, kuba@kernel.org, ilias.apalodimas@linaro.org,
 davem@davemloft.net, hawk@kernel.org, saeed@kernel.org,
 ttoukan.linux@gmail.com, brouer@redhat.com
Cc: Joe Damato
Subject: [net-next v7 1/4] page_pool: Add allocation stats
Date: Fri, 25 Feb 2022 09:41:51 -0800
Message-Id: <1645810914-35485-2-git-send-email-jdamato@fastly.com>
In-Reply-To: <1645810914-35485-1-git-send-email-jdamato@fastly.com>
References: <1645810914-35485-1-git-send-email-jdamato@fastly.com>

Add per-pool statistics counters for the allocation path of a page pool.
These stats are incremented in softirq context, so no locking or per-cpu
variables are needed.

This code is disabled by default and a kernel config option is provided
for users who wish to enable these statistics.

The statistics added are:
 - fast: successful fast path allocations
 - slow: slow path order-0 allocations
 - slow_high_order: slow path high order allocations
 - empty: ptr ring is empty, so a slow path allocation was forced
 - refill: an allocation which triggered a refill of the cache
 - waive: pages obtained from the ptr ring that cannot be added to the
   cache due to a NUMA mismatch

Signed-off-by: Joe Damato
Acked-by: Jesper Dangaard Brouer
Reviewed-by: Ilias Apalodimas
---
 include/net/page_pool.h | 18 ++++++++++++++++++
 net/Kconfig             | 13 +++++++++++++
 net/core/page_pool.c    | 24 ++++++++++++++++++++----
 3 files changed, 51 insertions(+), 4 deletions(-)

diff --git a/include/net/page_pool.h b/include/net/page_pool.h
index 97c3c19..1f27e8a4 100644
--- a/include/net/page_pool.h
+++ b/include/net/page_pool.h
@@ -84,6 +84,19 @@ struct page_pool_params {
 	void *init_arg;
 };
 
+#ifdef CONFIG_PAGE_POOL_STATS
+struct page_pool_alloc_stats {
+	u64 fast; /* fast path allocations */
+	u64 slow; /* slow-path order 0 allocations */
+	u64 slow_high_order; /* slow-path high order allocations */
+	u64 empty; /* failed refills due to empty ptr ring, forcing
+		    * slow path allocation
+		    */
+	u64 refill; /* allocations via successful refill */
+	u64 waive;  /* failed refills due to numa zone mismatch */
+};
+#endif
+
 struct page_pool {
 	struct page_pool_params p;
 
@@ -96,6 +109,11 @@ struct page_pool {
 	unsigned int frag_offset;
 	struct page *frag_page;
 	long frag_users;
+
+#ifdef CONFIG_PAGE_POOL_STATS
+	/* these stats are incremented while in softirq context */
+	struct page_pool_alloc_stats alloc_stats;
+#endif
 	u32 xdp_mem_id;
 
 	/*
diff --git a/net/Kconfig b/net/Kconfig
index 8a1f9d0..6b78f69 100644
--- a/net/Kconfig
+++ b/net/Kconfig
@@ -434,6 +434,19 @@ config NET_DEVLINK
 config PAGE_POOL
 	bool
 
+config PAGE_POOL_STATS
+	default n
+	bool "Page pool stats"
+	depends on PAGE_POOL
+	help
+	  Enable page pool statistics to track page allocation and recycling
+	  in page pools. This option incurs additional CPU cost in allocation
+	  and recycle paths and additional memory cost to store the statistics.
+	  These statistics are only available if this option is enabled and if
+	  the driver using the page pool supports exporting this data.
+
+	  If unsure, say N.
+
 config FAILOVER
 	tristate "Generic failover module"
 	help
diff --git a/net/core/page_pool.c b/net/core/page_pool.c
index e25d359..0fa4b76 100644
--- a/net/core/page_pool.c
+++ b/net/core/page_pool.c
@@ -26,6 +26,13 @@
 
 #define BIAS_MAX	LONG_MAX
 
+#ifdef CONFIG_PAGE_POOL_STATS
+/* alloc_stat_inc is intended to be used in softirq context */
+#define alloc_stat_inc(pool, __stat)	(pool->alloc_stats.__stat++)
+#else
+#define alloc_stat_inc(pool, __stat)
+#endif
+
 static int page_pool_init(struct page_pool *pool,
 			  const struct page_pool_params *params)
 {
@@ -117,8 +124,10 @@ static struct page *page_pool_refill_alloc_cache(struct page_pool *pool)
 	int pref_nid; /* preferred NUMA node */
 
 	/* Quicker fallback, avoid locks when ring is empty */
-	if (__ptr_ring_empty(r))
+	if (__ptr_ring_empty(r)) {
+		alloc_stat_inc(pool, empty);
 		return NULL;
+	}
 
 	/* Softirq guarantee CPU and thus NUMA node is stable. This,
 	 * assumes CPU refilling driver RX-ring will also run RX-NAPI.
@@ -145,14 +154,17 @@ static struct page *page_pool_refill_alloc_cache(struct page_pool *pool)
 			 * This limit stress on page buddy alloactor.
 			 */
 			page_pool_return_page(pool, page);
+			alloc_stat_inc(pool, waive);
 			page = NULL;
 			break;
 		}
 	} while (pool->alloc.count < PP_ALLOC_CACHE_REFILL);
 
 	/* Return last page */
-	if (likely(pool->alloc.count > 0))
+	if (likely(pool->alloc.count > 0)) {
 		page = pool->alloc.cache[--pool->alloc.count];
+		alloc_stat_inc(pool, refill);
+	}
 
 	return page;
 }
@@ -166,6 +178,7 @@ static struct page *__page_pool_get_cached(struct page_pool *pool)
 	if (likely(pool->alloc.count)) {
 		/* Fast-path */
 		page = pool->alloc.cache[--pool->alloc.count];
+		alloc_stat_inc(pool, fast);
 	} else {
 		page = page_pool_refill_alloc_cache(pool);
 	}
@@ -239,6 +252,7 @@ static struct page *__page_pool_alloc_page_order(struct page_pool *pool,
 		return NULL;
 	}
 
+	alloc_stat_inc(pool, slow_high_order);
 	page_pool_set_pp_info(pool, page);
 
 	/* Track how many pages are held 'in-flight' */
@@ -293,10 +307,12 @@ static struct page *__page_pool_alloc_pages_slow(struct page_pool *pool,
 	}
 
 	/* Return last page */
-	if (likely(pool->alloc.count > 0))
+	if (likely(pool->alloc.count > 0)) {
 		page = pool->alloc.cache[--pool->alloc.count];
-	else
+		alloc_stat_inc(pool, slow);
+	} else {
 		page = NULL;
+	}
 
 	/* When page just alloc'ed is should/must have refcnt 1.
 	 */
 	return page;
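
To make the counter semantics above concrete, here is a small standalone C sketch. It is not kernel code: the pool, cache, and ring below are simplified stand-ins for the real page_pool structures, and the numbers are invented. It only models when each allocation counter is bumped: a hit in the alloc cache counts as fast, pulling pages out of the ptr ring to refill the cache counts as refill, and an empty ring counts as empty and forces a slow-path allocation, which counts as slow.

/* Standalone sketch of the allocation counters; simplified stand-ins,
 * not the real page_pool structures.
 */
#include <stdint.h>
#include <stdio.h>

struct sketch_pool {
	int cache_count;			/* pages in the alloc cache */
	int ring_count;				/* pages waiting in the ptr ring */
	uint64_t fast, refill, empty, slow;	/* counters from this patch */
};

static void sketch_alloc(struct sketch_pool *p)
{
	if (p->cache_count > 0) {		/* cache hit */
		p->cache_count--;
		p->fast++;
		return;
	}
	if (p->ring_count == 0) {		/* ring empty -> slow path */
		p->empty++;
		p->slow++;			/* pretend the page allocator succeeded */
		return;
	}
	/* refill the cache from the ring, then hand out one page */
	int take = p->ring_count < 4 ? p->ring_count : 4;

	p->cache_count += take - 1;
	p->ring_count -= take;
	p->refill++;
}

int main(void)
{
	struct sketch_pool p = { .cache_count = 1, .ring_count = 3 };

	for (int i = 0; i < 8; i++)
		sketch_alloc(&p);

	printf("fast=%llu refill=%llu empty=%llu slow=%llu\n",
	       (unsigned long long)p.fast, (unsigned long long)p.refill,
	       (unsigned long long)p.empty, (unsigned long long)p.slow);
	return 0;
}
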
From patchwork Fri Feb 25 17:41:52 2022
X-Patchwork-Submitter: Joe Damato
X-Patchwork-Id: 12760679
X-Patchwork-Delegate: kuba@kernel.org
From: Joe Damato
To: netdev@vger.kernel.org, kuba@kernel.org, ilias.apalodimas@linaro.org,
 davem@davemloft.net, hawk@kernel.org, saeed@kernel.org,
 ttoukan.linux@gmail.com, brouer@redhat.com
Cc: Joe Damato
Subject: [net-next v7 2/4] page_pool: Add recycle stats
Date: Fri, 25 Feb 2022 09:41:52 -0800
Message-Id: <1645810914-35485-3-git-send-email-jdamato@fastly.com>
In-Reply-To: <1645810914-35485-1-git-send-email-jdamato@fastly.com>
References: <1645810914-35485-1-git-send-email-jdamato@fastly.com>

Add per-cpu stats tracking page pool recycling events:
 - cached: recycling placed page in the page pool cache
 - cache_full: page pool cache was full
 - ring: page placed into the ptr ring
 - ring_full: page released from page pool because the ptr ring was full
 - released_refcnt: page released (and not recycled) because refcnt > 1

Signed-off-by: Joe Damato
Acked-by: Jesper Dangaard Brouer
Reviewed-by: Ilias Apalodimas
---
 include/net/page_pool.h | 16 ++++++++++++++++
 net/core/page_pool.c    | 28 +++++++++++++++++++++++++-
 2 files changed, 43 insertions(+), 1 deletion(-)

diff --git a/include/net/page_pool.h b/include/net/page_pool.h
index 1f27e8a4..298af95 100644
--- a/include/net/page_pool.h
+++ b/include/net/page_pool.h
@@ -95,6 +95,18 @@ struct page_pool_alloc_stats {
 	u64 refill; /* allocations via successful refill */
 	u64 waive;  /* failed refills due to numa zone mismatch */
 };
+
+struct page_pool_recycle_stats {
+	u64 cached;	/* recycling placed page in the cache. */
+	u64 cache_full; /* cache was full */
+	u64 ring;	/* recycling placed page back into ptr ring */
+	u64 ring_full;	/* page was released from page-pool because
+			 * PTR ring was full.
+			 */
+	u64 released_refcnt; /* page released because of elevated
+			      * refcnt
+			      */
+};
 #endif
 
 struct page_pool {
@@ -144,6 +156,10 @@ struct page_pool {
 	 */
 	struct ptr_ring ring;
 
+#ifdef CONFIG_PAGE_POOL_STATS
+	/* recycle stats are per-cpu to avoid locking */
+	struct page_pool_recycle_stats __percpu *recycle_stats;
+#endif
 	atomic_t pages_state_release_cnt;
 
 	/* A page_pool is strictly tied to a single RX-queue being
diff --git a/net/core/page_pool.c b/net/core/page_pool.c
index 0fa4b76..27233bf 100644
--- a/net/core/page_pool.c
+++ b/net/core/page_pool.c
@@ -29,8 +29,15 @@
 #ifdef CONFIG_PAGE_POOL_STATS
 /* alloc_stat_inc is intended to be used in softirq context */
 #define alloc_stat_inc(pool, __stat)	(pool->alloc_stats.__stat++)
+/* recycle_stat_inc is safe to use when preemption is possible. */
+#define recycle_stat_inc(pool, __stat)						\
+	do {									\
+		struct page_pool_recycle_stats __percpu *s = pool->recycle_stats; \
+		this_cpu_inc(s->__stat);					\
+	} while (0)
 #else
 #define alloc_stat_inc(pool, __stat)
+#define recycle_stat_inc(pool, __stat)
 #endif
 
 static int page_pool_init(struct page_pool *pool,
@@ -80,6 +87,12 @@ static int page_pool_init(struct page_pool *pool,
 	    pool->p.flags & PP_FLAG_PAGE_FRAG)
 		return -EINVAL;
 
+#ifdef CONFIG_PAGE_POOL_STATS
+	pool->recycle_stats = alloc_percpu(struct page_pool_recycle_stats);
+	if (!pool->recycle_stats)
+		return -ENOMEM;
+#endif
+
 	if (ptr_ring_init(&pool->ring, ring_qsize, GFP_KERNEL) < 0)
 		return -ENOMEM;
 
@@ -410,6 +423,11 @@ static bool page_pool_recycle_in_ring(struct page_pool *pool, struct page *page)
 	else
 		ret = ptr_ring_produce_bh(&pool->ring, page);
 
+#ifdef CONFIG_PAGE_POOL_STATS
+	if (ret == 0)
+		recycle_stat_inc(pool, ring);
+#endif
+
 	return (ret == 0) ? true : false;
 }
 
@@ -421,11 +439,14 @@ static bool page_pool_recycle_in_ring(struct page_pool *pool, struct page *page)
 static bool page_pool_recycle_in_cache(struct page *page,
 				       struct page_pool *pool)
 {
-	if (unlikely(pool->alloc.count == PP_ALLOC_CACHE_SIZE))
+	if (unlikely(pool->alloc.count == PP_ALLOC_CACHE_SIZE)) {
+		recycle_stat_inc(pool, cache_full);
 		return false;
+	}
 
 	/* Caller MUST have verified/know (page_ref_count(page) == 1) */
 	pool->alloc.cache[pool->alloc.count++] = page;
+	recycle_stat_inc(pool, cached);
 	return true;
 }
 
@@ -475,6 +496,7 @@ __page_pool_put_page(struct page_pool *pool, struct page *page,
 	 * doing refcnt based recycle tricks, meaning another process
 	 * will be invoking put_page.
 	 */
+	recycle_stat_inc(pool, released_refcnt);
 	/* Do not replace this with page_pool_return_page() */
 	page_pool_release_page(pool, page);
 	put_page(page);
@@ -488,6 +510,7 @@ void page_pool_put_defragged_page(struct page_pool *pool, struct page *page,
 	page = __page_pool_put_page(pool, page, dma_sync_size, allow_direct);
 	if (page && !page_pool_recycle_in_ring(pool, page)) {
 		/* Cache full, fallback to free pages */
+		recycle_stat_inc(pool, ring_full);
 		page_pool_return_page(pool, page);
 	}
 }
@@ -636,6 +659,9 @@ static void page_pool_free(struct page_pool *pool)
 	if (pool->p.flags & PP_FLAG_DMA_MAP)
 		put_device(pool->p.dev);
 
+#ifdef CONFIG_PAGE_POOL_STATS
+	free_percpu(pool->recycle_stats);
+#endif
 	kfree(pool);
 }
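
The recycle counters above are deliberately per-cpu so that the write side never needs a lock: each CPU increments only its own slice, and a reader folds the slices together when it wants totals. A rough userspace analogue, with threads standing in for CPUs and plain increments standing in for this_cpu_inc(), is sketched below (build with -pthread); the real kernel code relies on per-cpu data and softirq/BH context rather than threads.

/* Userspace analogue of the per-cpu recycle counters: writers touch only
 * their own slice, the reader sums all slices, as the next patch's
 * page_pool_get_stats() does with per_cpu_ptr().
 */
#include <pthread.h>
#include <stdint.h>
#include <stdio.h>

#define NCPU 4

static struct recycle_slice {
	uint64_t cached;
	uint64_t ring;
} slices[NCPU];

static void *recycler(void *arg)
{
	struct recycle_slice *s = arg;

	for (int i = 0; i < 100000; i++)
		i % 8 ? s->cached++ : s->ring++;
	return NULL;
}

int main(void)
{
	pthread_t tid[NCPU];
	uint64_t cached = 0, ring = 0;

	for (int i = 0; i < NCPU; i++)
		pthread_create(&tid[i], NULL, recycler, &slices[i]);
	for (int i = 0; i < NCPU; i++)
		pthread_join(tid[i], NULL);

	/* read side: fold the per-"CPU" slices into a single view */
	for (int i = 0; i < NCPU; i++) {
		cached += slices[i].cached;
		ring += slices[i].ring;
	}
	printf("cached=%llu ring=%llu\n",
	       (unsigned long long)cached, (unsigned long long)ring);
	return 0;
}
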
From patchwork Fri Feb 25 17:41:53 2022
X-Patchwork-Submitter: Joe Damato
X-Patchwork-Id: 12760680
X-Patchwork-Delegate: kuba@kernel.org
From: Joe Damato
To: netdev@vger.kernel.org, kuba@kernel.org, ilias.apalodimas@linaro.org,
 davem@davemloft.net, hawk@kernel.org, saeed@kernel.org,
 ttoukan.linux@gmail.com, brouer@redhat.com
Cc: Joe Damato
Subject: [net-next v7 3/4] page_pool: Add function to batch and return stats
Date: Fri, 25 Feb 2022 09:41:53 -0800
Message-Id: <1645810914-35485-4-git-send-email-jdamato@fastly.com>
In-Reply-To: <1645810914-35485-1-git-send-email-jdamato@fastly.com>
References: <1645810914-35485-1-git-send-email-jdamato@fastly.com>

Add a function, page_pool_get_stats, which drivers can use to obtain
stats for a specified page_pool.

Signed-off-by: Joe Damato
Acked-by: Jesper Dangaard Brouer
Reviewed-by: Ilias Apalodimas
---
 include/net/page_pool.h | 17 +++++++++++++++++
 net/core/page_pool.c    | 25 +++++++++++++++++++++++++
 2 files changed, 42 insertions(+)

diff --git a/include/net/page_pool.h b/include/net/page_pool.h
index 298af95..ea5fb70 100644
--- a/include/net/page_pool.h
+++ b/include/net/page_pool.h
@@ -107,6 +107,23 @@ struct page_pool_recycle_stats {
 			      * refcnt
 			      */
 };
+
+/* This struct wraps the above stats structs so users of the
+ * page_pool_get_stats API can pass a single argument when requesting the
+ * stats for the page pool.
+ */
+struct page_pool_stats {
+	struct page_pool_alloc_stats alloc_stats;
+	struct page_pool_recycle_stats recycle_stats;
+};
+
+/*
+ * Drivers that wish to harvest page pool stats and report them to users
+ * (perhaps via ethtool, debugfs, or another mechanism) can allocate a
+ * struct page_pool_stats and call page_pool_get_stats to get stats for the specified pool.
+ */
+bool page_pool_get_stats(struct page_pool *pool,
+			 struct page_pool_stats *stats);
 #endif
 
 struct page_pool {
diff --git a/net/core/page_pool.c b/net/core/page_pool.c
index 27233bf..f4f8f5f 100644
--- a/net/core/page_pool.c
+++ b/net/core/page_pool.c
@@ -35,6 +35,31 @@
 		struct page_pool_recycle_stats __percpu *s = pool->recycle_stats; \
 		this_cpu_inc(s->__stat);					\
 	} while (0)
+
+bool page_pool_get_stats(struct page_pool *pool,
+			 struct page_pool_stats *stats)
+{
+	int cpu = 0;
+
+	if (!stats)
+		return false;
+
+	memcpy(&stats->alloc_stats, &pool->alloc_stats, sizeof(pool->alloc_stats));
+
+	for_each_possible_cpu(cpu) {
+		const struct page_pool_recycle_stats *pcpu =
+			per_cpu_ptr(pool->recycle_stats, cpu);
+
+		stats->recycle_stats.cached += pcpu->cached;
+		stats->recycle_stats.cache_full += pcpu->cache_full;
+		stats->recycle_stats.ring += pcpu->ring;
+		stats->recycle_stats.ring_full += pcpu->ring_full;
+		stats->recycle_stats.released_refcnt += pcpu->released_refcnt;
+	}
+
+	return true;
+}
+EXPORT_SYMBOL(page_pool_get_stats);
 #else
 #define alloc_stat_inc(pool, __stat)
 #define recycle_stat_inc(pool, __stat)
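
As a usage illustration (not part of this series), a driver-side caller of the new API might look like the sketch below. The foo_rx_queue structure and its fields are invented names, and, as in the mlx5 patch that follows, the code is guarded by CONFIG_PAGE_POOL_STATS because page_pool_get_stats() is only declared when that option is enabled.

/* Hypothetical driver-side consumer of page_pool_get_stats(); the "foo"
 * names are illustrative only.
 */
#ifdef CONFIG_PAGE_POOL_STATS
#include <net/page_pool.h>

struct foo_rx_queue {
	struct page_pool *page_pool;
	u64 pp_alloc_fast;
	u64 pp_recycle_ring;
};

static void foo_update_page_pool_stats(struct foo_rx_queue *rxq)
{
	struct page_pool_stats stats = { };

	/* returns false if a NULL stats pointer is passed */
	if (!page_pool_get_stats(rxq->page_pool, &stats))
		return;

	/* alloc counters are per-pool; recycle counters were summed per-cpu */
	rxq->pp_alloc_fast = stats.alloc_stats.fast;
	rxq->pp_recycle_ring = stats.recycle_stats.ring;
}
#endif /* CONFIG_PAGE_POOL_STATS */
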
From patchwork Fri Feb 25 17:41:54 2022
X-Patchwork-Submitter: Joe Damato
X-Patchwork-Id: 12760681
X-Patchwork-Delegate: kuba@kernel.org
From: Joe Damato
To: netdev@vger.kernel.org, kuba@kernel.org, ilias.apalodimas@linaro.org,
 davem@davemloft.net, hawk@kernel.org, saeed@kernel.org,
 ttoukan.linux@gmail.com, brouer@redhat.com
Cc: Joe Damato
Subject: [net-next v7 4/4] mlx5: add support for page_pool_get_stats
Date: Fri, 25 Feb 2022 09:41:54 -0800
Message-Id: <1645810914-35485-5-git-send-email-jdamato@fastly.com>
In-Reply-To: <1645810914-35485-1-git-send-email-jdamato@fastly.com>
References: <1645810914-35485-1-git-send-email-jdamato@fastly.com>

This change adds support for the page_pool_get_stats API to mlx5. If the
user has enabled CONFIG_PAGE_POOL_STATS in their kernel, ethtool will
output page pool stats.

Signed-off-by: Joe Damato
---
 drivers/net/ethernet/mellanox/mlx5/core/en_stats.c | 76 ++++++++++++++++++++++
 drivers/net/ethernet/mellanox/mlx5/core/en_stats.h | 27 +++++++-
 2 files changed, 102 insertions(+), 1 deletion(-)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_stats.c b/drivers/net/ethernet/mellanox/mlx5/core/en_stats.c
index 3e5d8c7..56eedf5 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_stats.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_stats.c
@@ -37,6 +37,10 @@
 #include "en/ptp.h"
 #include "en/port.h"
 
+#ifdef CONFIG_PAGE_POOL_STATS
+#include <net/page_pool.h>
+#endif
+
 static unsigned int stats_grps_num(struct mlx5e_priv *priv)
 {
 	return !priv->profile->stats_grps_num ? 0 :
@@ -183,6 +187,19 @@ static const struct counter_desc sw_stats_desc[] = {
 	{ MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, rx_congst_umr) },
 	{ MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, rx_arfs_err) },
 	{ MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, rx_recover) },
+#ifdef CONFIG_PAGE_POOL_STATS
+	{ MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, rx_page_pool_fast) },
+	{ MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, rx_page_pool_slow) },
+	{ MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, rx_page_pool_slow_high_order) },
+	{ MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, rx_page_pool_empty) },
+	{ MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, rx_page_pool_refill) },
+	{ MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, rx_page_pool_waive) },
+	{ MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, rx_page_pool_rec_cached) },
+	{ MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, rx_page_pool_rec_cache_full) },
+	{ MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, rx_page_pool_rec_ring) },
+	{ MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, rx_page_pool_rec_ring_full) },
+	{ MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, rx_page_pool_rec_released_ref) },
+#endif
 #ifdef CONFIG_MLX5_EN_TLS
 	{ MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, rx_tls_decrypted_packets) },
 	{ MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, rx_tls_decrypted_bytes) },
@@ -349,6 +366,20 @@ static void mlx5e_stats_grp_sw_update_stats_rq_stats(struct mlx5e_sw_stats *s,
 	s->rx_congst_umr  += rq_stats->congst_umr;
 	s->rx_arfs_err    += rq_stats->arfs_err;
 	s->rx_recover     += rq_stats->recover;
+#ifdef CONFIG_PAGE_POOL_STATS
+	s->rx_page_pool_fast += rq_stats->page_pool_fast;
+	s->rx_page_pool_slow += rq_stats->page_pool_slow;
+	s->rx_page_pool_empty += rq_stats->page_pool_empty;
+	s->rx_page_pool_refill += rq_stats->page_pool_refill;
+	s->rx_page_pool_waive += rq_stats->page_pool_waive;
+
+	s->rx_page_pool_slow_high_order += rq_stats->page_pool_slow_high_order;
+	s->rx_page_pool_rec_cached += rq_stats->page_pool_rec_cached;
+	s->rx_page_pool_rec_cache_full += rq_stats->page_pool_rec_cache_full;
+	s->rx_page_pool_rec_ring += rq_stats->page_pool_rec_ring;
+	s->rx_page_pool_rec_ring_full += rq_stats->page_pool_rec_ring_full;
+	s->rx_page_pool_rec_released_ref += rq_stats->page_pool_rec_released_ref;
+#endif
 #ifdef CONFIG_MLX5_EN_TLS
 	s->rx_tls_decrypted_packets += rq_stats->tls_decrypted_packets;
 	s->rx_tls_decrypted_bytes   += rq_stats->tls_decrypted_bytes;
@@ -455,6 +486,35 @@ static void mlx5e_stats_grp_sw_update_stats_qos(struct mlx5e_priv *priv,
 	}
 }
 
+#ifdef CONFIG_PAGE_POOL_STATS
+static void mlx5e_stats_update_stats_rq_page_pool(struct mlx5e_channel *c)
+{
+	struct mlx5e_rq_stats *rq_stats = c->rq.stats;
+	struct page_pool *pool = c->rq.page_pool;
+	struct page_pool_stats stats = { 0 };
+
+	if (!page_pool_get_stats(pool, &stats))
+		return;
+
+	rq_stats->page_pool_fast = stats.alloc_stats.fast;
+	rq_stats->page_pool_slow = stats.alloc_stats.slow;
+	rq_stats->page_pool_slow_high_order = stats.alloc_stats.slow_high_order;
+	rq_stats->page_pool_empty = stats.alloc_stats.empty;
+	rq_stats->page_pool_waive = stats.alloc_stats.waive;
+	rq_stats->page_pool_refill = stats.alloc_stats.refill;
+
+	rq_stats->page_pool_rec_cached = stats.recycle_stats.cached;
+	rq_stats->page_pool_rec_cache_full = stats.recycle_stats.cache_full;
+	rq_stats->page_pool_rec_ring = stats.recycle_stats.ring;
+	rq_stats->page_pool_rec_ring_full = stats.recycle_stats.ring_full;
+	rq_stats->page_pool_rec_released_ref = stats.recycle_stats.released_refcnt;
+}
+#else
+static void mlx5e_stats_update_stats_rq_page_pool(struct mlx5e_channel *c)
+{
+}
+#endif
+
 static MLX5E_DECLARE_STATS_GRP_OP_UPDATE_STATS(sw)
 {
 	struct mlx5e_sw_stats *s = &priv->stats.sw;
@@ -465,8 +525,11 @@ static MLX5E_DECLARE_STATS_GRP_OP_UPDATE_STATS(sw)
 
 	for (i = 0; i < priv->stats_nch; i++) {
 		struct mlx5e_channel_stats *channel_stats = priv->channel_stats[i];
+
 		int j;
 
+		mlx5e_stats_update_stats_rq_page_pool(priv->channels.c[i]);
+
 		mlx5e_stats_grp_sw_update_stats_rq_stats(s, &channel_stats->rq);
 		mlx5e_stats_grp_sw_update_stats_xdpsq(s, &channel_stats->rq_xdpsq);
 		mlx5e_stats_grp_sw_update_stats_ch_stats(s, &channel_stats->ch);
@@ -1887,6 +1950,19 @@ static const struct counter_desc rq_stats_desc[] = {
 	{ MLX5E_DECLARE_RX_STAT(struct mlx5e_rq_stats, congst_umr) },
 	{ MLX5E_DECLARE_RX_STAT(struct mlx5e_rq_stats, arfs_err) },
 	{ MLX5E_DECLARE_RX_STAT(struct mlx5e_rq_stats, recover) },
+#ifdef CONFIG_PAGE_POOL_STATS
+	{ MLX5E_DECLARE_RX_STAT(struct mlx5e_rq_stats, page_pool_fast) },
+	{ MLX5E_DECLARE_RX_STAT(struct mlx5e_rq_stats, page_pool_slow) },
+	{ MLX5E_DECLARE_RX_STAT(struct mlx5e_rq_stats, page_pool_slow_high_order) },
+	{ MLX5E_DECLARE_RX_STAT(struct mlx5e_rq_stats, page_pool_empty) },
+	{ MLX5E_DECLARE_RX_STAT(struct mlx5e_rq_stats, page_pool_refill) },
+	{ MLX5E_DECLARE_RX_STAT(struct mlx5e_rq_stats, page_pool_waive) },
+	{ MLX5E_DECLARE_RX_STAT(struct mlx5e_rq_stats, page_pool_rec_cached) },
+	{ MLX5E_DECLARE_RX_STAT(struct mlx5e_rq_stats, page_pool_rec_cache_full) },
+	{ MLX5E_DECLARE_RX_STAT(struct mlx5e_rq_stats, page_pool_rec_ring) },
+	{ MLX5E_DECLARE_RX_STAT(struct mlx5e_rq_stats, page_pool_rec_ring_full) },
+	{ MLX5E_DECLARE_RX_STAT(struct mlx5e_rq_stats, page_pool_rec_released_ref) },
+#endif
 #ifdef CONFIG_MLX5_EN_TLS
 	{ MLX5E_DECLARE_RX_STAT(struct mlx5e_rq_stats, tls_decrypted_packets) },
 	{ MLX5E_DECLARE_RX_STAT(struct mlx5e_rq_stats, tls_decrypted_bytes) },
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_stats.h b/drivers/net/ethernet/mellanox/mlx5/core/en_stats.h
index 14eaf92..9f66425 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_stats.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_stats.h
@@ -205,7 +205,19 @@ struct mlx5e_sw_stats {
 	u64 ch_aff_change;
 	u64 ch_force_irq;
 	u64 ch_eq_rearm;
-
+#ifdef CONFIG_PAGE_POOL_STATS
+	u64 rx_page_pool_fast;
+	u64 rx_page_pool_slow;
+	u64 rx_page_pool_slow_high_order;
+	u64 rx_page_pool_empty;
+	u64 rx_page_pool_refill;
+	u64 rx_page_pool_waive;
+	u64 rx_page_pool_rec_cached;
+	u64 rx_page_pool_rec_cache_full;
+	u64 rx_page_pool_rec_ring;
+	u64 rx_page_pool_rec_ring_full;
+	u64 rx_page_pool_rec_released_ref;
+#endif
 #ifdef CONFIG_MLX5_EN_TLS
 	u64 tx_tls_encrypted_packets;
 	u64 tx_tls_encrypted_bytes;
@@ -352,6 +364,19 @@ struct mlx5e_rq_stats {
 	u64 congst_umr;
 	u64 arfs_err;
 	u64 recover;
+#ifdef CONFIG_PAGE_POOL_STATS
+	u64 page_pool_fast;
+	u64 page_pool_slow;
+	u64 page_pool_slow_high_order;
+	u64 page_pool_empty;
+	u64 page_pool_refill;
+	u64 page_pool_waive;
+	u64 page_pool_rec_cached;
+	u64 page_pool_rec_cache_full;
+	u64 page_pool_rec_ring;
+	u64 page_pool_rec_ring_full;
+	u64 page_pool_rec_released_ref;
+#endif
 #ifdef CONFIG_MLX5_EN_TLS
 	u64 tls_decrypted_packets;
 	u64 tls_decrypted_bytes;