Message ID:   20220629133305.15012-1-huangguangbin2@huawei.com
State:        Superseded
Delegated to: Netdev Maintainers
Series:       [net-next,v2] net: page_pool: optimize page pool page allocation in NUMA scenario
On Wed, 29 Jun 2022 21:33:05 +0800 Guangbin Huang wrote:
> +#ifdef CONFIG_NUMA
> +	pref_nid = (pool->p.nid == NUMA_NO_NODE) ? numa_mem_id() : pool->p.nid;
> +#else
> +	/* Ignore pool->p.nid setting if !CONFIG_NUMA */
> +	pref_nid = NUMA_NO_NODE;
> +#endif

Please factor this out to a helper, this is a copy of the code from
page_pool_refill_alloc_cache() and #ifdefs are a little yuck.
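For concreteness, a minimal sketch of the kind of helper Jakub is asking for; the name page_pool_pref_nid() is hypothetical, not something from the thread or the tree:

static int page_pool_pref_nid(const struct page_pool *pool)
{
#ifdef CONFIG_NUMA
	/* Prefer the configured node, falling back to the local memory node */
	return (pool->p.nid == NUMA_NO_NODE) ? numa_mem_id() : pool->p.nid;
#else
	/* Ignore pool->p.nid setting if !CONFIG_NUMA */
	return NUMA_NO_NODE;
#endif
}

page_pool_refill_alloc_cache() and __page_pool_alloc_pages_slow() could then both call this, keeping the #ifdef in one spot.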
On 01/07/2022 06.15, Jakub Kicinski wrote:
> On Wed, 29 Jun 2022 21:33:05 +0800 Guangbin Huang wrote:
>> +#ifdef CONFIG_NUMA
>> +	pref_nid = (pool->p.nid == NUMA_NO_NODE) ? numa_mem_id() : pool->p.nid;
>> +#else
>> +	/* Ignore pool->p.nid setting if !CONFIG_NUMA */
>> +	pref_nid = NUMA_NO_NODE;
>> +#endif
>
> Please factor this out to a helper, this is a copy of the code from
> page_pool_refill_alloc_cache() and #ifdefs are a little yuck.

I would say simply use 'pool->p.nid' in the call to
alloc_pages_bulk_array_node() and drop this optimization (that was
copy-pasted from the fast-path).

The optimization avoids one read from memory, depending at compile
time on CONFIG_NUMA. It is *not* worth doing in this code path, which
is even named "slow" (__page_pool_alloc_pages_slow).

--Jesper
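Jesper's alternative shrinks the change to a single line at the call site. A minimal sketch of what that simplified call could look like (this renders the reviewer's suggestion, not the posted patch):

	nr_pages = alloc_pages_bulk_array_node(gfp, pool->p.nid, bulk,
					       pool->alloc.cache);

Since alloc_pages_bulk_array_node() takes the preferred node directly, the caller no longer needs its own pref_nid computation at all.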
On 2022/7/1 15:56, Jesper Dangaard Brouer wrote:
>
> On 01/07/2022 06.15, Jakub Kicinski wrote:
>> On Wed, 29 Jun 2022 21:33:05 +0800 Guangbin Huang wrote:
>>> +#ifdef CONFIG_NUMA
>>> +	pref_nid = (pool->p.nid == NUMA_NO_NODE) ? numa_mem_id() : pool->p.nid;
>>> +#else
>>> +	/* Ignore pool->p.nid setting if !CONFIG_NUMA */
>>> +	pref_nid = NUMA_NO_NODE;
>>> +#endif
>>
>> Please factor this out to a helper, this is a copy of the code from
>> page_pool_refill_alloc_cache() and #ifdefs are a little yuck.
>>
>
> I would say simply use 'pool->p.nid' in the call to
> alloc_pages_bulk_array_node() and drop this optimization (that was
> copy-pasted from the fast-path).
>
> The optimization avoids one read from memory, depending at compile
> time on CONFIG_NUMA. It is *not* worth doing in this code path, which
> is even named "slow" (__page_pool_alloc_pages_slow).
>
> --Jesper

Simply using pool->p.nid looks simpler and makes sense in both
scenarios. I will rewrite and test the patch in the next version.
diff --git a/net/core/page_pool.c b/net/core/page_pool.c
index f18e6e771993..64cb2c617de8 100644
--- a/net/core/page_pool.c
+++ b/net/core/page_pool.c
@@ -377,6 +377,7 @@ static struct page *__page_pool_alloc_pages_slow(struct page_pool *pool,
 	unsigned int pp_order = pool->p.order;
 	struct page *page;
 	int i, nr_pages;
+	int pref_nid; /* preferred NUMA node */
 
 	/* Don't support bulk alloc for high-order pages */
 	if (unlikely(pp_order))
@@ -386,10 +387,18 @@ static struct page *__page_pool_alloc_pages_slow(struct page_pool *pool,
 	if (unlikely(pool->alloc.count > 0))
 		return pool->alloc.cache[--pool->alloc.count];
 
+#ifdef CONFIG_NUMA
+	pref_nid = (pool->p.nid == NUMA_NO_NODE) ? numa_mem_id() : pool->p.nid;
+#else
+	/* Ignore pool->p.nid setting if !CONFIG_NUMA */
+	pref_nid = NUMA_NO_NODE;
+#endif
+
 	/* Mark empty alloc.cache slots "empty" for alloc_pages_bulk_array */
 	memset(&pool->alloc.cache, 0, sizeof(void *) * bulk);
 
-	nr_pages = alloc_pages_bulk_array(gfp, bulk, pool->alloc.cache);
+	nr_pages = alloc_pages_bulk_array_node(gfp, pref_nid, bulk,
+					       pool->alloc.cache);
 	if (unlikely(!nr_pages))
 		return NULL;
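For context on why the preferred node matters here: drivers pick it when creating the pool, via the nid field of struct page_pool_params, and before this patch the bulk slow path allocated from the local node regardless of that setting. An illustrative setup (field values and the dev variable are placeholders, not taken from the patch or any particular driver):

	struct page_pool_params pp_params = {
		.order     = 0,
		.pool_size = 1024,             /* illustrative ring size */
		.nid       = dev_to_node(dev), /* node this patch starts honoring */
		.dev       = dev,
		.dma_dir   = DMA_FROM_DEVICE,
	};
	struct page_pool *pool = page_pool_create(&pp_params);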