[RFC,net-next/mm,V1,1/3] page_pool: Remove workqueue in new shutdown scheme

Message ID 168244293875.1741095.10502498932946558516.stgit@firesoul (mailing list archive)
State New
Series page_pool: new approach for leak detection and shutdown phase

Commit Message

Jesper Dangaard Brouer April 25, 2023, 5:15 p.m. UTC
This removes the workqueue scheme that periodically tests whether
inflight has reached zero, such that the page_pool memory can be freed.

This change adds a check in the fast-path free code for a shutdown
flag bit after returning PP pages.

Performance is very important for PP, as the fast path is used for
XDP_DROP use-cases where NIC drivers recycle PP pages directly into
the PP alloc cache.

The goal was that this code change should have zero impact on this
fast path. The slight code reorg of likely() is deliberate. Micro
benchmarking done via a kernel module[1] on x86_64 shows this code
change costs only a single extra instruction (approx 0.3 nanosec on a
CPU E5-1650 @ 3.60GHz).

It is possible to make this check zero-impact via a static_key, but
that change is split out into the next patch, as we are unsure if it
is worth the complexity.

[1] https://github.com/netoptimizer/prototype-kernel/blob/master/kernel/lib/bench_page_pool_simple.c
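
For illustration, a minimal sketch of what the static_key variant could
look like (the key and helper names below are hypothetical; the actual
change is in the next patch of this series):

#include <linux/jump_label.h>

/* Patched out (a NOP in the fast path) until a pool enters shutdown */
DEFINE_STATIC_KEY_FALSE(pp_shutdown_key);	/* hypothetical name */

static inline void page_pool_maybe_shutdown(struct page_pool *pool)
{
	/* Costs a single NOP while no pool is shutting down */
	if (static_branch_unlikely(&pp_shutdown_key) &&
	    (pool->p.flags & PP_FLAG_SHUTDOWN))
		page_pool_shutdown_attempt(pool);
}

/* page_pool_destroy() would pair the flag with static_branch_inc():
 *	pool->p.flags |= PP_FLAG_SHUTDOWN;
 *	static_branch_inc(&pp_shutdown_key);
 * and the final page_pool_free() with static_branch_dec().
 */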

Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
---
 include/net/page_pool.h |    9 +++----
 net/core/page_pool.c    |   59 +++++++++++++++++++----------------------------
 2 files changed, 28 insertions(+), 40 deletions(-)

Comments

Yunsheng Lin April 27, 2023, 12:57 a.m. UTC | #1
On 2023/4/26 1:15, Jesper Dangaard Brouer wrote:
> @@ -609,6 +609,8 @@ void page_pool_put_defragged_page(struct page_pool *pool, struct page *page,
>  		recycle_stat_inc(pool, ring_full);
>  		page_pool_return_page(pool, page);
>  	}
> +	if (pool->p.flags & PP_FLAG_SHUTDOWN)
> +		page_pool_shutdown_attempt(pool);

It seems we have allowed page_pool_shutdown_attempt() to be called
concurrently here. Isn't there a time window between atomic_inc_return_relaxed()
and page_pool_inflight() for pool->pages_state_release_cnt, which may cause
page_pool_free() to be called twice?
Jesper Dangaard Brouer April 27, 2023, 10:47 a.m. UTC | #2
On 27/04/2023 02.57, Yunsheng Lin wrote:
> On 2023/4/26 1:15, Jesper Dangaard Brouer wrote:
>> @@ -609,6 +609,8 @@ void page_pool_put_defragged_page(struct page_pool *pool, struct page *page,
>>   		recycle_stat_inc(pool, ring_full);
>>   		page_pool_return_page(pool, page);
>>   	}
>> +	if (pool->p.flags & PP_FLAG_SHUTDOWN)
>> +		page_pool_shutdown_attempt(pool);
> 
> It seems we have allowed page_pool_shutdown_attempt() to be called
> concurrently here. Isn't there a time window between atomic_inc_return_relaxed()
> and page_pool_inflight() for pool->pages_state_release_cnt, which may cause
> page_pool_free() to be called twice?
> 

Yes, I think that is correct.
I actually woke up this morning thinking of this case of double freeing,
and this time window.  Thanks for spotting and confirming this issue.

Basically: Two concurrent CPUs executing page_pool_shutdown_attempt()
can both end up seeing inflight equal zero, resulting in both of them
kfreeing the memory (in page_pool_free()) as they both think they are
the last user of the PP instance.

I've been thinking how to address this.
This is my current idea:

(1) Atomic variable inc-and-test (or cmpxchg) that resolves the last-user
    race (rough sketch below).
(2) Defer the free to a call_rcu callback to let other CPUs finish.
(3) Might need rcu_read_lock() in page_pool_shutdown_attempt().
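
A rough sketch of idea (1) (the 'destroy_once' field is hypothetical,
and ideas (2)/(3) are still needed so a losing CPU cannot touch the
pool after the winner has freed it):

static void page_pool_shutdown_attempt(struct page_pool *pool)
{
	if (page_pool_inflight(pool))
		return;

	/* Several CPUs can get here seeing inflight == 0, but only
	 * the first one to flip 0 -> 1 proceeds to free the pool.
	 */
	if (atomic_cmpxchg(&pool->destroy_once, 0, 1) == 0)
		page_pool_free(pool);	/* deferred via call_rcu, idea (2) */
}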

--Jesper
Jesper Dangaard Brouer April 27, 2023, 6:29 p.m. UTC | #3
On 27/04/2023 12.47, Jesper Dangaard Brouer wrote:
> 
> On 27/04/2023 02.57, Yunsheng Lin wrote:
>> On 2023/4/26 1:15, Jesper Dangaard Brouer wrote:
>>> @@ -609,6 +609,8 @@ void page_pool_put_defragged_page(struct page_pool *pool, struct page *page,
>>>           recycle_stat_inc(pool, ring_full);
>>>           page_pool_return_page(pool, page);
>>>       }
>>> +    if (pool->p.flags & PP_FLAG_SHUTDOWN)
>>> +        page_pool_shutdown_attempt(pool);
>>
>> It seems we have allowed page_pool_shutdown_attempt() to be called
>> concurrently here. Isn't there a time window between atomic_inc_return_relaxed()
>> and page_pool_inflight() for pool->pages_state_release_cnt, which may cause
>> page_pool_free() to be called twice?
>>
> 
> Yes, I think that is correct.
> I actually woke up this morning thinking of this case of double freeing,
> and this time window.  Thanks for spotting and confirming this issue.
> 
> Basically: Two concurrent CPUs executing page_pool_shutdown_attempt()
> can both end up seeing inflight equal zero, resulting in both of them
> kfreeing the memory (in page_pool_free()) as they both think they are
> the last user of the PP instance.
> 
> I've been thinking how to address this.
> This is my current idea:
> 
> (1) Atomic variable inc-and-test (or cmpxchg) that resolves the last-user
>     race (rough sketch below).
> (2) Defer the free to a call_rcu callback to let other CPUs finish.
> (3) Might need rcu_read_lock() in page_pool_shutdown_attempt().
> 

I think I found a simpler approach (adjustment patch attached)
that avoids the race without any call_rcu callbacks.

Will post a V2.

--Jesper
fix race

From: Jesper Dangaard Brouer <brouer@redhat.com>

Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
---
 net/core/page_pool.c |   48 ++++++++++++++++++++++++++++++++++--------------
 1 file changed, 34 insertions(+), 14 deletions(-)

diff --git a/net/core/page_pool.c b/net/core/page_pool.c
index ce7e8dda6403..25139b162674 100644
--- a/net/core/page_pool.c
+++ b/net/core/page_pool.c
@@ -451,9 +451,8 @@ EXPORT_SYMBOL(page_pool_alloc_pages);
  */
 #define _distance(a, b)	(s32)((a) - (b))
 
-static s32 page_pool_inflight(struct page_pool *pool)
+static s32 __page_pool_inflight(struct page_pool *pool, u32 release_cnt)
 {
-	u32 release_cnt = atomic_read(&pool->pages_state_release_cnt);
 	u32 hold_cnt = READ_ONCE(pool->pages_state_hold_cnt);
 	s32 inflight;
 
@@ -465,6 +464,14 @@ static s32 page_pool_inflight(struct page_pool *pool)
 	return inflight;
 }
 
+static s32 page_pool_inflight(struct page_pool *pool)
+{
+	u32 release_cnt = atomic_read(&pool->pages_state_release_cnt);
+	return __page_pool_inflight(pool, release_cnt);
+}
+
+static int page_pool_free_attempt(struct page_pool *pool, u32 release_cnt);
+
 /* Disconnects a page (from a page_pool).  API users can have a need
  * to disconnect a page (from a page_pool), to allow it to be used as
  * a regular page (that will eventually be returned to the normal
@@ -473,7 +480,7 @@ static s32 page_pool_inflight(struct page_pool *pool)
 void page_pool_release_page(struct page_pool *pool, struct page *page)
 {
 	dma_addr_t dma;
-	int count;
+	u32 count;
 
 	if (!(pool->p.flags & PP_FLAG_DMA_MAP))
 		/* Always account for inflight pages, even if we didn't
@@ -490,8 +497,12 @@ void page_pool_release_page(struct page_pool *pool, struct page *page)
 	page_pool_set_dma_addr(page, 0);
 skip_dma_unmap:
 	page_pool_clear_pp_info(page);
-	count = atomic_inc_return_relaxed(&pool->pages_state_release_cnt);
+	count = atomic_inc_return(&pool->pages_state_release_cnt);
 	trace_page_pool_state_release(pool, page, count);
+
+	/* In shutdown phase, last page will free pool instance */
+	if (pool->p.flags & PP_FLAG_SHUTDOWN)
+		page_pool_free_attempt(pool, count);
 }
 EXPORT_SYMBOL(page_pool_release_page);
 
@@ -543,7 +554,7 @@ static bool page_pool_recycle_in_cache(struct page *page,
 	return true;
 }
 
-static void page_pool_shutdown_attempt(struct page_pool *pool);
+static void page_pool_empty_ring(struct page_pool *pool);
 
 /* If the page refcnt == 1, this will try to recycle the page.
  * if PP_FLAG_DMA_SYNC_DEV is set, we'll try to sync the DMA area for
@@ -610,7 +621,7 @@ void page_pool_put_defragged_page(struct page_pool *pool, struct page *page,
 		page_pool_return_page(pool, page);
 	}
 	if (pool->p.flags & PP_FLAG_SHUTDOWN)
-		page_pool_shutdown_attempt(pool);
+		page_pool_empty_ring(pool);
 }
 EXPORT_SYMBOL(page_pool_put_defragged_page);
 
@@ -660,7 +671,7 @@ void page_pool_put_page_bulk(struct page_pool *pool, void **data,
 
 out:
 	if (pool->p.flags & PP_FLAG_SHUTDOWN)
-		page_pool_shutdown_attempt(pool);
+		page_pool_empty_ring(pool);
 }
 EXPORT_SYMBOL(page_pool_put_page_bulk);
 
@@ -743,6 +754,7 @@ struct page *page_pool_alloc_frag(struct page_pool *pool,
 }
 EXPORT_SYMBOL(page_pool_alloc_frag);
 
+noinline
 static void page_pool_empty_ring(struct page_pool *pool)
 {
 	struct page *page;
@@ -802,22 +814,28 @@ static void page_pool_scrub(struct page_pool *pool)
 	page_pool_empty_ring(pool);
 }
 
-static int page_pool_release(struct page_pool *pool)
+noinline
+static int page_pool_free_attempt(struct page_pool *pool, u32 release_cnt)
 {
 	int inflight;
 
-	page_pool_scrub(pool);
-	inflight = page_pool_inflight(pool);
+	inflight = __page_pool_inflight(pool, release_cnt);
 	if (!inflight)
 		page_pool_free(pool);
 
 	return inflight;
 }
 
-noinline
-static void page_pool_shutdown_attempt(struct page_pool *pool)
+static int page_pool_release(struct page_pool *pool)
 {
-	page_pool_release(pool);
+	int inflight;
+
+	page_pool_scrub(pool);
+	inflight = page_pool_inflight(pool);
+	if (!inflight)
+		page_pool_free(pool);
+
+	return inflight;
 }
 
 void page_pool_use_xdp_mem(struct page_pool *pool, void (*disconnect)(void *),
@@ -861,7 +879,9 @@ void page_pool_destroy(struct page_pool *pool)
 	 * Enter into shutdown phase, and retry release to handle races.
 	 */
 	pool->p.flags |= PP_FLAG_SHUTDOWN;
-	page_pool_shutdown_attempt(pool);
+
+	/* Concurrent CPUs could have returned last pages into ptr_ring */
+	page_pool_empty_ring(pool);
 }
 EXPORT_SYMBOL(page_pool_destroy);
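
The property the adjustment relies on: atomic_inc_return() hands every
releasing CPU a unique counter value, so exactly one caller can observe
release_cnt == hold_cnt (i.e. inflight == 0) and become responsible for
the free. A standalone C11 sketch (userspace, not kernel code) of that
guarantee:

#include <pthread.h>
#include <stdatomic.h>
#include <stdint.h>
#include <stdio.h>

#define HOLD_CNT	1000	/* pages handed out before shutdown */
#define NTHREADS	4

static atomic_uint release_cnt;
static atomic_int  free_calls;	/* how many threads "freed" the pool */

static void *release_pages(void *arg)
{
	(void)arg;
	for (int i = 0; i < HOLD_CNT / NTHREADS; i++) {
		/* Mirrors atomic_inc_return(&pool->pages_state_release_cnt) */
		uint32_t count = atomic_fetch_add(&release_cnt, 1) + 1;

		/* Mirrors page_pool_free_attempt(pool, count) */
		if ((int32_t)(HOLD_CNT - count) == 0)
			atomic_fetch_add(&free_calls, 1);
	}
	return NULL;
}

int main(void)
{
	pthread_t tid[NTHREADS];

	for (int i = 0; i < NTHREADS; i++)
		pthread_create(&tid[i], NULL, release_pages, NULL);
	for (int i = 0; i < NTHREADS; i++)
		pthread_join(tid[i], NULL);

	/* Always prints 1: only one thread saw inflight hit zero */
	printf("free_calls = %d\n", atomic_load(&free_calls));
	return 0;
}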
Patch

diff --git a/include/net/page_pool.h b/include/net/page_pool.h
index c8ec2f34722b..a71c0f2695b0 100644
--- a/include/net/page_pool.h
+++ b/include/net/page_pool.h
@@ -50,6 +50,9 @@ 
 				 PP_FLAG_DMA_SYNC_DEV |\
 				 PP_FLAG_PAGE_FRAG)
 
+/* Internal flag: PP in shutdown phase, waiting for inflight pages */
+#define PP_FLAG_SHUTDOWN	BIT(8)
+
 /*
  * Fast allocation side cache array/stack
  *
@@ -151,11 +154,6 @@  static inline u64 *page_pool_ethtool_stats_get(u64 *data, void *stats)
 struct page_pool {
 	struct page_pool_params p;
 
-	struct delayed_work release_dw;
-	void (*disconnect)(void *);
-	unsigned long defer_start;
-	unsigned long defer_warn;
-
 	u32 pages_state_hold_cnt;
 	unsigned int frag_offset;
 	struct page *frag_page;
@@ -165,6 +163,7 @@  struct page_pool {
 	/* these stats are incremented while in softirq context */
 	struct page_pool_alloc_stats alloc_stats;
 #endif
+	void (*disconnect)(void *);
 	u32 xdp_mem_id;
 
 	/*
diff --git a/net/core/page_pool.c b/net/core/page_pool.c
index e212e9d7edcb..ce7e8dda6403 100644
--- a/net/core/page_pool.c
+++ b/net/core/page_pool.c
@@ -23,9 +23,6 @@ 
 
 #include <trace/events/page_pool.h>
 
-#define DEFER_TIME (msecs_to_jiffies(1000))
-#define DEFER_WARN_INTERVAL (60 * HZ)
-
 #define BIAS_MAX	LONG_MAX
 
 #ifdef CONFIG_PAGE_POOL_STATS
@@ -380,6 +377,10 @@  static struct page *__page_pool_alloc_pages_slow(struct page_pool *pool,
 	struct page *page;
 	int i, nr_pages;
 
+	/* API usage BUG: PP in shutdown phase, cannot alloc new pages */
+	if (WARN_ON(pool->p.flags & PP_FLAG_SHUTDOWN))
+		return NULL;
+
 	/* Don't support bulk alloc for high-order pages */
 	if (unlikely(pp_order))
 		return __page_pool_alloc_page_order(pool, gfp);
@@ -489,10 +490,6 @@  void page_pool_release_page(struct page_pool *pool, struct page *page)
 	page_pool_set_dma_addr(page, 0);
 skip_dma_unmap:
 	page_pool_clear_pp_info(page);
-
-	/* This may be the last page returned, releasing the pool, so
-	 * it is not safe to reference pool afterwards.
-	 */
 	count = atomic_inc_return_relaxed(&pool->pages_state_release_cnt);
 	trace_page_pool_state_release(pool, page, count);
 }
@@ -535,7 +532,7 @@  static bool page_pool_recycle_in_ring(struct page_pool *pool, struct page *page)
 static bool page_pool_recycle_in_cache(struct page *page,
 				       struct page_pool *pool)
 {
-	if (unlikely(pool->alloc.count == PP_ALLOC_CACHE_SIZE)) {
+	if (pool->alloc.count == PP_ALLOC_CACHE_SIZE) {
 		recycle_stat_inc(pool, cache_full);
 		return false;
 	}
@@ -546,6 +543,8 @@  static bool page_pool_recycle_in_cache(struct page *page,
 	return true;
 }
 
+static void page_pool_shutdown_attempt(struct page_pool *pool);
+
 /* If the page refcnt == 1, this will try to recycle the page.
  * if PP_FLAG_DMA_SYNC_DEV is set, we'll try to sync the DMA area for
  * the configured size min(dma_sync_size, pool->max_len).
@@ -572,7 +571,8 @@  __page_pool_put_page(struct page_pool *pool, struct page *page,
 			page_pool_dma_sync_for_device(pool, page,
 						      dma_sync_size);
 
-		if (allow_direct && in_softirq() &&
+		/* During PP shutdown, no direct recycle must occur */
+		if (likely(allow_direct && in_softirq()) &&
 		    page_pool_recycle_in_cache(page, pool))
 			return NULL;
 
@@ -609,6 +609,8 @@  void page_pool_put_defragged_page(struct page_pool *pool, struct page *page,
 		recycle_stat_inc(pool, ring_full);
 		page_pool_return_page(pool, page);
 	}
+	if (pool->p.flags & PP_FLAG_SHUTDOWN)
+		page_pool_shutdown_attempt(pool);
 }
 EXPORT_SYMBOL(page_pool_put_defragged_page);
 
@@ -648,13 +650,17 @@  void page_pool_put_page_bulk(struct page_pool *pool, void **data,
 
 	/* Hopefully all pages was return into ptr_ring */
 	if (likely(i == bulk_len))
-		return;
+		goto out;
 
 	/* ptr_ring cache full, free remaining pages outside producer lock
 	 * since put_page() with refcnt == 1 can be an expensive operation
 	 */
 	for (; i < bulk_len; i++)
 		page_pool_return_page(pool, data[i]);
+
+out:
+	if (pool->p.flags & PP_FLAG_SHUTDOWN)
+		page_pool_shutdown_attempt(pool);
 }
 EXPORT_SYMBOL(page_pool_put_page_bulk);
 
@@ -808,27 +814,10 @@  static int page_pool_release(struct page_pool *pool)
 	return inflight;
 }
 
-static void page_pool_release_retry(struct work_struct *wq)
+noinline
+static void page_pool_shutdown_attempt(struct page_pool *pool)
 {
-	struct delayed_work *dwq = to_delayed_work(wq);
-	struct page_pool *pool = container_of(dwq, typeof(*pool), release_dw);
-	int inflight;
-
-	inflight = page_pool_release(pool);
-	if (!inflight)
-		return;
-
-	/* Periodic warning */
-	if (time_after_eq(jiffies, pool->defer_warn)) {
-		int sec = (s32)((u32)jiffies - (u32)pool->defer_start) / HZ;
-
-		pr_warn("%s() stalled pool shutdown %d inflight %d sec\n",
-			__func__, inflight, sec);
-		pool->defer_warn = jiffies + DEFER_WARN_INTERVAL;
-	}
-
-	/* Still not ready to be disconnected, retry later */
-	schedule_delayed_work(&pool->release_dw, DEFER_TIME);
+	page_pool_release(pool);
 }
 
 void page_pool_use_xdp_mem(struct page_pool *pool, void (*disconnect)(void *),
@@ -868,11 +857,11 @@  void page_pool_destroy(struct page_pool *pool)
 	if (!page_pool_release(pool))
 		return;
 
-	pool->defer_start = jiffies;
-	pool->defer_warn  = jiffies + DEFER_WARN_INTERVAL;
-
-	INIT_DELAYED_WORK(&pool->release_dw, page_pool_release_retry);
-	schedule_delayed_work(&pool->release_dw, DEFER_TIME);
+	/* PP have pages inflight, thus cannot immediately release memory.
+	 * Enter into shutdown phase, and retry release to handle races.
+	 */
+	pool->p.flags |= PP_FLAG_SHUTDOWN;
+	page_pool_shutdown_attempt(pool);
 }
 EXPORT_SYMBOL(page_pool_destroy);
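
A final note on the accounting this scheme leans on: page_pool_inflight()
computes _distance(hold_cnt, release_cnt) as (s32)((a) - (b)), which stays
correct even when the u32 counters wrap around, as long as the real
difference fits in an s32. A standalone sketch (userspace C, not from the
patch) of that property:

#include <assert.h>
#include <stdint.h>
#include <stdio.h>

static int32_t distance(uint32_t a, uint32_t b)
{
	return (int32_t)(a - b);	/* same form as _distance() */
}

int main(void)
{
	uint32_t hold_cnt    = 5;		/* wrapped past UINT32_MAX */
	uint32_t release_cnt = UINT32_MAX - 2;	/* not yet wrapped */

	/* 8 pages are inflight even though hold_cnt < release_cnt
	 * numerically: modular u32 subtraction plus the signed cast
	 * recovers the true difference.
	 */
	assert(distance(hold_cnt, release_cnt) == 8);
	printf("inflight = %d\n", distance(hold_cnt, release_cnt));
	return 0;
}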