Message ID | 1643933373-6590-11-git-send-email-jdamato@fastly.com (mailing list archive) |
---|---|
State | Superseded |
Delegated to: | Netdev Maintainers |
Series | page_pool: Add page_pool stat counters |
Hi Joe,

On Thu, Feb 03, 2022 at 04:09:32PM -0800, Joe Damato wrote:
> Track how often pages obtained from the ring cannot be added to the cache
> because of a NUMA mismatch.
>
> Signed-off-by: Joe Damato <jdamato@fastly.com>
> ---
>  include/net/page_pool.h | 1 +
>  net/core/page_pool.c    | 1 +
>  2 files changed, 2 insertions(+)
>
> diff --git a/include/net/page_pool.h b/include/net/page_pool.h
> index 65cd0ca..bb87706 100644
> --- a/include/net/page_pool.h
> +++ b/include/net/page_pool.h
> @@ -150,6 +150,7 @@ struct page_pool_stats {
>  			 * slow path allocation
>  			 */
>  		u64 refill; /* allocations via successful refill */
> +		u64 waive; /* failed refills due to numa zone mismatch */
>  	} alloc;
>  };
>  #endif
> diff --git a/net/core/page_pool.c b/net/core/page_pool.c
> index 4fe48ec..0bd084c 100644
> --- a/net/core/page_pool.c
> +++ b/net/core/page_pool.c
> @@ -166,6 +166,7 @@ static struct page *page_pool_refill_alloc_cache(struct page_pool *pool)
>  			 * This limit stress on page buddy alloactor.
>  			 */
>  			page_pool_return_page(pool, page);
> +			this_cpu_inc_alloc_stat(pool, waive);
>  			page = NULL;
>  			break;
>  		}
> --
> 2.7.4
>

Personally i'd find it easier to read if patches 1-10 were squashed in a
single commit.

Regards
/Ilias
On Thu, Feb 3, 2022 at 11:44 PM Ilias Apalodimas
<ilias.apalodimas@linaro.org> wrote:
>
> Hi Joe,
>
> On Thu, Feb 03, 2022 at 04:09:32PM -0800, Joe Damato wrote:
> > Track how often pages obtained from the ring cannot be added to the cache
> > because of a NUMA mismatch.
> >
> > Signed-off-by: Joe Damato <jdamato@fastly.com>
> > ---
> >  include/net/page_pool.h | 1 +
> >  net/core/page_pool.c    | 1 +
> >  2 files changed, 2 insertions(+)
> >
> > diff --git a/include/net/page_pool.h b/include/net/page_pool.h
> > index 65cd0ca..bb87706 100644
> > --- a/include/net/page_pool.h
> > +++ b/include/net/page_pool.h
> > @@ -150,6 +150,7 @@ struct page_pool_stats {
> >  			 * slow path allocation
> >  			 */
> >  		u64 refill; /* allocations via successful refill */
> > +		u64 waive; /* failed refills due to numa zone mismatch */
> >  	} alloc;
> >  };
> >  #endif
> > diff --git a/net/core/page_pool.c b/net/core/page_pool.c
> > index 4fe48ec..0bd084c 100644
> > --- a/net/core/page_pool.c
> > +++ b/net/core/page_pool.c
> > @@ -166,6 +166,7 @@ static struct page *page_pool_refill_alloc_cache(struct page_pool *pool)
> >  			 * This limit stress on page buddy alloactor.
> >  			 */
> >  			page_pool_return_page(pool, page);
> > +			this_cpu_inc_alloc_stat(pool, waive);
> >  			page = NULL;
> >  			break;
> >  		}
> > --
> > 2.7.4
> >
>
> Personally i'd find it easier to read if patches 1-10 were squashed in a
> single commit.

Thanks for the feedback. I've squashed patches 1-10 to a single commit in
my v5 branch.
diff --git a/include/net/page_pool.h b/include/net/page_pool.h
index 65cd0ca..bb87706 100644
--- a/include/net/page_pool.h
+++ b/include/net/page_pool.h
@@ -150,6 +150,7 @@ struct page_pool_stats {
 			 * slow path allocation
 			 */
 		u64 refill; /* allocations via successful refill */
+		u64 waive; /* failed refills due to numa zone mismatch */
 	} alloc;
 };
 #endif
diff --git a/net/core/page_pool.c b/net/core/page_pool.c
index 4fe48ec..0bd084c 100644
--- a/net/core/page_pool.c
+++ b/net/core/page_pool.c
@@ -166,6 +166,7 @@ static struct page *page_pool_refill_alloc_cache(struct page_pool *pool)
 			 * This limit stress on page buddy alloactor.
 			 */
 			page_pool_return_page(pool, page);
+			this_cpu_inc_alloc_stat(pool, waive);
 			page = NULL;
 			break;
 		}
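A note on the helper used in the net/core/page_pool.c hunk: `this_cpu_inc_alloc_stat()` is introduced by an earlier patch in this series, and its definition is not part of this diff. As a rough, hypothetical sketch of the kind of per-CPU increment such a helper could perform (the `stats` member name and layout are assumptions made here, not the series' actual code):

```c
/*
 * Hypothetical sketch only -- the real helper is defined in an earlier
 * patch of this series. Assumes the pool carries a per-CPU
 * "struct page_pool_stats __percpu *stats" member.
 */
#define this_cpu_inc_alloc_stat(pool, __stat) \
	this_cpu_inc((pool)->stats->alloc.__stat)
```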
Track how often pages obtained from the ring cannot be added to the cache
because of a NUMA mismatch.

Signed-off-by: Joe Damato <jdamato@fastly.com>
---
 include/net/page_pool.h | 1 +
 net/core/page_pool.c    | 1 +
 2 files changed, 2 insertions(+)
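This patch only counts waived refills; it does not expose the value anywhere. For context, reading a per-CPU counter like this generally means summing it across CPUs. The sketch below is hypothetical and not part of this series; the `stats` member and the `page_pool_waive_total()` name are assumptions made purely for illustration:

```c
/*
 * Hypothetical reader: sum the per-CPU "waive" counter for one pool.
 * Assumes pool->stats is a "struct page_pool_stats __percpu *", which
 * is an assumption about the series, not code from this patch.
 */
static u64 page_pool_waive_total(const struct page_pool *pool)
{
	u64 total = 0;
	int cpu;

	for_each_possible_cpu(cpu)
		total += per_cpu_ptr(pool->stats, cpu)->alloc.waive;

	return total;
}
```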