
[RFC,v4,3/3] page_pool: skip dma sync operation for inflight pages

Message ID 20241120103456.396577-4-linyunsheng@huawei.com (mailing list archive)
State RFC
Delegated to: Netdev Maintainers
Series [RFC,v4,1/3] page_pool: fix timing for checking and disabling napi_local

Checks

Context Check Description
netdev/series_format warning Series does not have a cover letter; Target tree name not specified in the subject
netdev/tree_selection success Guessed tree name to be net-next
netdev/ynl success Generated files up to date; no warnings/errors; no diff in generated;
netdev/fixes_present success Fixes tag not required for -next series
netdev/header_inline success No static functions without inline keyword in header files
netdev/build_32bit success Errors and warnings before: 3 this patch: 3
netdev/build_tools success No tools touched, skip
netdev/cc_maintainers success CCed 7 of 7 maintainers
netdev/build_clang success Errors and warnings before: 3 this patch: 3
netdev/verify_signedoff success Signed-off-by tag matches author and committer
netdev/deprecated_api success None detected
netdev/check_selftest success No net selftest shell script
netdev/verify_fixes success Fixes tag looks correct
netdev/build_allmodconfig_warn success Errors and warnings before: 4 this patch: 4
netdev/checkpatch success total: 0 errors, 0 warnings, 0 checks, 72 lines checked
netdev/build_clang_rust success No Rust files in patch. Skipping build
netdev/kdoc success Errors and warnings before: 3 this patch: 3
netdev/source_inline success Was 0 now: 0

Commit Message

Yunsheng Lin Nov. 20, 2024, 10:34 a.m. UTC
Skip the dma sync operation for inflight pages before
page_pool_destroy() returns to the driver, as the DMA
API expects to be called with a valid device bound to
a driver, as mentioned in [1].

After page_pool_destroy() is called, a page is not
expected to be recycled back to the pool->alloc cache,
and no dma sync operation is needed when the page is
not recyclable or pool->ring is full, so only the dma
sync operation for inflight pages needs to be skipped.
Do that by clearing pool->dma_sync and performing the
sync under the rcu read lock when a page is recycled
into pool->ring, ensuring that no dma sync operation
is issued after page_pool_destroy() has returned.

1. https://lore.kernel.org/all/caf31b5e-0e8f-4844-b7ba-ef59ed13b74e@arm.com/
CC: Robin Murphy <robin.murphy@arm.com>
CC: Alexander Duyck <alexander.duyck@gmail.com>
CC: Andrew Morton <akpm@linux-foundation.org>
CC: IOMMU <iommu@lists.linux.dev>
CC: MM <linux-mm@kvack.org>
Fixes: f71fec47c2df ("page_pool: make sure struct device is stable")
Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>
---
 net/core/page_pool.c | 25 ++++++++++++++++++-------
 1 file changed, 18 insertions(+), 7 deletions(-)
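
In short, the dma sync on the ring-recycle path is now issued only after
the page has been successfully produced into pool->ring, and only inside
an RCU read-side critical section, so that page_pool_destroy() can clear
pool->dma_sync and wait out any concurrent sync with synchronize_rcu().
A rough sketch of the resulting function (condensed from the hunks below;
the ring-produce step is reconstructed from unchanged context, and the
pool->dma_sync check is assumed to sit inside
page_pool_dma_sync_for_device()):

static bool page_pool_recycle_in_ring(struct page_pool *pool, netmem_ref netmem,
				      unsigned int dma_sync_size)
{
	int ret;

	/* BH protection not needed if current is softirq */
	if (in_softirq())
		ret = ptr_ring_produce(&pool->ring, (__force void *)netmem);
	else
		ret = ptr_ring_produce_bh(&pool->ring, (__force void *)netmem);

	if (!ret) {
		recycle_stat_inc(pool, ring);

		/* Sync only once the page is committed to pool->ring, under
		 * the RCU read lock; after page_pool_destroy() has cleared
		 * pool->dma_sync and a grace period has elapsed, this call
		 * becomes a no-op.
		 */
		rcu_read_lock();
		page_pool_dma_sync_for_device(pool, netmem, dma_sync_size);
		rcu_read_unlock();
	}

	return !ret;
}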

Comments

Robin Murphy Nov. 20, 2024, 4:17 p.m. UTC | #1
On 20/11/2024 10:34 am, Yunsheng Lin wrote:
> Skip the dma sync operation for inflight pages before
> page_pool_destroy() returns to the driver, as the DMA
> API expects to be called with a valid device bound to
> a driver, as mentioned in [1].
> 
> After page_pool_destroy() is called, a page is not
> expected to be recycled back to the pool->alloc cache,
> and no dma sync operation is needed when the page is
> not recyclable or pool->ring is full, so only the dma
> sync operation for inflight pages needs to be skipped.
> Do that by clearing pool->dma_sync and performing the
> sync under the rcu read lock when a page is recycled
> into pool->ring, ensuring that no dma sync operation
> is issued after page_pool_destroy() has returned.

Something feels off here - either this is a micro-optimisation which I 
wouldn't really expect to be meaningful, or it means patch #2 doesn't 
actually do what it claims. If it really is possible to attempt to 
dma_sync a page *after* page_pool_inflight_unmap() has already reclaimed 
and unmapped it, that represents yet another DMA API lifecycle issue, 
which as well as being even more obviously incorrect usage-wise, could 
also still lead to the same crash (if the device is non-coherent).

Otherwise, I don't imagine it's really worth worrying about optimising 
out syncs for any pages which happen to get naturally returned after 
page_pool_destroy() starts but before they're explicitly reclaimed. 
Realistically, the kinds of big server systems where reclaim takes an 
appreciable amount of time are going to be coherent and skipping syncs 
anyway.

Thanks,
Robin.

> 1. https://lore.kernel.org/all/caf31b5e-0e8f-4844-b7ba-ef59ed13b74e@arm.com/
> CC: Robin Murphy <robin.murphy@arm.com>
> CC: Alexander Duyck <alexander.duyck@gmail.com>
> CC: Andrew Morton <akpm@linux-foundation.org>
> CC: IOMMU <iommu@lists.linux.dev>
> CC: MM <linux-mm@kvack.org>
> Fixes: f71fec47c2df ("page_pool: make sure struct device is stable")
> Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>
> ---
>   net/core/page_pool.c | 25 ++++++++++++++++++-------
>   1 file changed, 18 insertions(+), 7 deletions(-)
> 
> diff --git a/net/core/page_pool.c b/net/core/page_pool.c
> index 33a314abbba4..0bde7c6c781a 100644
> --- a/net/core/page_pool.c
> +++ b/net/core/page_pool.c
> @@ -712,7 +712,8 @@ static void page_pool_return_page(struct page_pool *pool, netmem_ref netmem)
>   	rcu_read_unlock();
>   }
>   
> -static bool page_pool_recycle_in_ring(struct page_pool *pool, netmem_ref netmem)
> +static bool page_pool_recycle_in_ring(struct page_pool *pool, netmem_ref netmem,
> +				      unsigned int dma_sync_size)
>   {
>   	int ret;
>   	/* BH protection not needed if current is softirq */
> @@ -723,10 +724,13 @@ static bool page_pool_recycle_in_ring(struct page_pool *pool, netmem_ref netmem)
>   
>   	if (!ret) {
>   		recycle_stat_inc(pool, ring);
> -		return true;
> +
> +		rcu_read_lock();
> +		page_pool_dma_sync_for_device(pool, netmem, dma_sync_size);
> +		rcu_read_unlock();
>   	}
>   
> -	return false;
> +	return !ret;
>   }
>   
>   /* Only allow direct recycling in special circumstances, into the
> @@ -779,10 +783,11 @@ __page_pool_put_page(struct page_pool *pool, netmem_ref netmem,
>   	if (likely(__page_pool_page_can_be_recycled(netmem))) {
>   		/* Read barrier done in page_ref_count / READ_ONCE */
>   
> -		page_pool_dma_sync_for_device(pool, netmem, dma_sync_size);
> -
> -		if (allow_direct && page_pool_recycle_in_cache(netmem, pool))
> +		if (allow_direct && page_pool_recycle_in_cache(netmem, pool)) {
> +			page_pool_dma_sync_for_device(pool, netmem,
> +						      dma_sync_size);
>   			return 0;
> +		}
>   
>   		/* Page found as candidate for recycling */
>   		return netmem;
> @@ -845,7 +850,7 @@ void page_pool_put_unrefed_netmem(struct page_pool *pool, netmem_ref netmem,
>   
>   	netmem =
>   		__page_pool_put_page(pool, netmem, dma_sync_size, allow_direct);
> -	if (netmem && !page_pool_recycle_in_ring(pool, netmem)) {
> +	if (netmem && !page_pool_recycle_in_ring(pool, netmem, dma_sync_size)) {
>   		/* Cache full, fallback to free pages */
>   		recycle_stat_inc(pool, ring_full);
>   		page_pool_return_page(pool, netmem);
> @@ -903,14 +908,18 @@ void page_pool_put_page_bulk(struct page_pool *pool, void **data,
>   
>   	/* Bulk producer into ptr_ring page_pool cache */
>   	in_softirq = page_pool_producer_lock(pool);
> +	rcu_read_lock();
>   	for (i = 0; i < bulk_len; i++) {
>   		if (__ptr_ring_produce(&pool->ring, data[i])) {
>   			/* ring full */
>   			recycle_stat_inc(pool, ring_full);
>   			break;
>   		}
> +		page_pool_dma_sync_for_device(pool, (__force netmem_ref)data[i],
> +					      -1);
>   	}
>   	recycle_stat_add(pool, ring, i);
> +	rcu_read_unlock();
>   	page_pool_producer_unlock(pool, in_softirq);
>   
>   	/* Hopefully all pages was return into ptr_ring */
> @@ -1200,6 +1209,8 @@ void page_pool_destroy(struct page_pool *pool)
>   	if (!page_pool_release(pool))
>   		return;
>   
> +	pool->dma_sync = false;
> +
>   	/* Paired with rcu lock in page_pool_napi_local() to enable clearing
>   	 * of pool->p.napi in page_pool_disable_direct_recycling() is seen
>   	 * before returning to driver to free the napi instance.
Yunsheng Lin Nov. 21, 2024, 8:04 a.m. UTC | #2
On 2024/11/21 0:17, Robin Murphy wrote:
> On 20/11/2024 10:34 am, Yunsheng Lin wrote:
>> Skip the dma sync operation for inflight pages before
>> page_pool_destroy() returns to the driver, as the DMA
>> API expects to be called with a valid device bound to
>> a driver, as mentioned in [1].
>>
>> After page_pool_destroy() is called, a page is not
>> expected to be recycled back to the pool->alloc cache,
>> and no dma sync operation is needed when the page is
>> not recyclable or pool->ring is full, so only the dma
>> sync operation for inflight pages needs to be skipped.
>> Do that by clearing pool->dma_sync and performing the
>> sync under the rcu read lock when a page is recycled
>> into pool->ring, ensuring that no dma sync operation
>> is issued after page_pool_destroy() has returned.
> 
> Something feels off here - either this is a micro-optimisation which I wouldn't really expect to be meaningful, or it means patch #2 doesn't actually do what it claims. If it really is possible to attempt to dma_sync a page *after* page_pool_inflight_unmap() has already reclaimed and unmapped it, that represents yet another DMA API lifecycle issue, which as well as being even more obviously incorrect usage-wise, could also still lead to the same crash (if the device is non-coherent).

A page owned by a page_pool mostly goes through the steps below:
1. When its pool is empty, page_pool calls the buddy allocator API to
   allocate a page and calls the DMA mapping and sync_for_device APIs
   for it. Otherwise it reuses a page already in the pool.

2. The driver calls the page_pool API to allocate the page, and passes
   the page to the network stack after a packet is DMA'ed into the page
   and the sync_for_cpu API is called.

3. When the network stack is done with the page, it calls the page_pool
   API to free the page.

4. If the page is not recyclable, page_pool does the dma unmapping and
   releases the page back to the buddy allocator. Otherwise it does the
   sync_for_device and puts the page back into its pool; the page may go
   through step 1 again when the driver calls the page_pool allocate API
   (a rough driver-side sketch of this lifecycle follows below).
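
As a rough driver-side sketch of the above lifecycle (the function name
example_rx_setup(), the pool size and the flags are illustrative only;
error handling and the real rx ring plumbing are omitted):

#include <linux/dma-direction.h>
#include <net/page_pool/helpers.h>

static int example_rx_setup(struct device *dev)
{
	/* Step 1: create a pool that dma-maps pages and syncs them for
	 * the device on the driver's behalf.
	 */
	struct page_pool_params pp_params = {
		.order		= 0,
		.pool_size	= 256,
		.dev		= dev,
		.dma_dir	= DMA_FROM_DEVICE,
		.max_len	= PAGE_SIZE,
		.flags		= PP_FLAG_DMA_MAP | PP_FLAG_DMA_SYNC_DEV,
	};
	struct page_pool *pool = page_pool_create(&pp_params);
	struct page *page;

	if (IS_ERR(pool))
		return PTR_ERR(pool);

	/* Step 2: rx refill -- allocate a page (mapped and synced for
	 * the device by the pool), post it to the hardware, then sync
	 * for CPU and pass the received data up the stack.
	 */
	page = page_pool_dev_alloc_pages(pool);

	/* Steps 3-4: when the stack is done, the page comes back; the
	 * pool either recycles it (sync_for_device again) or unmaps it
	 * and returns it to the buddy allocator.
	 */
	if (page)
		page_pool_put_full_page(pool, page, false);

	/* Teardown: after this returns, no further dma sync may be
	 * issued for pages still inflight, which is what this patch
	 * enforces.
	 */
	page_pool_destroy(pool);
	return 0;
}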

The calling of the dma mapping and dma sync APIs is controlled by
pool->dma_map and pool->dma_sync respectively; the previous patch only
clears pool->dma_map after doing the dma unmapping. This patch ensures
that there is no dma sync for the recycle case of step 4 by clearing
pool->dma_sync.

The dma sync skipping is also guaranteed to happen before
page_pool_inflight_unmap() is called, because all callers will see the
clearing of pool->dma_sync after synchronize_rcu(), and
page_pool_inflight_unmap() is called after that same synchronize_rcu()
in page_pool_destroy().
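
Put differently, the intended pairing looks roughly like the below
(reader_side()/writer_side() are illustrative names; the destroy-side
ordering is paraphrased from patch 2/3 plus this patch, not verbatim
code):

/* Reader side -- recycling a page into pool->ring (this patch): */
static void reader_side(struct page_pool *pool, netmem_ref netmem)
{
	rcu_read_lock();
	/* becomes a no-op once pool->dma_sync has been cleared */
	page_pool_dma_sync_for_device(pool, netmem, -1);
	rcu_read_unlock();
}

/* Writer side -- page_pool_destroy() (patch 2/3 + this patch): */
static void writer_side(struct page_pool *pool)
{
	pool->dma_sync = false;		/* stop new dma syncs               */
	synchronize_rcu();		/* wait for syncs already under way */
	page_pool_inflight_unmap(pool);	/* only now unmap inflight pages    */
}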

> 
> Otherwise, I don't imagine it's really worth worrying about optimising out syncs for any pages which happen to get naturally returned after page_pool_destroy() starts but before they're explicitly reclaimed. Realistically, the kinds of big server systems where reclaim takes an appreciable amount of time are going to be coherent and skipping syncs anyway.

The skipping is about skipping the dma sync for those inflight pages.
I should make it clearer in the commit log that the skipping happens
before page_pool_inflight_unmap() is called rather than before
page_pool_destroy() returns.

>
Robin Murphy Nov. 21, 2024, 1:44 p.m. UTC | #3
On 21/11/2024 8:04 am, Yunsheng Lin wrote:
> On 2024/11/21 0:17, Robin Murphy wrote:
>> On 20/11/2024 10:34 am, Yunsheng Lin wrote:
>>> Skip the dma sync operation for inflight pages before
>>> page_pool_destroy() returns to the driver, as the DMA
>>> API expects to be called with a valid device bound to
>>> a driver, as mentioned in [1].
>>>
>>> After page_pool_destroy() is called, a page is not
>>> expected to be recycled back to the pool->alloc cache,
>>> and no dma sync operation is needed when the page is
>>> not recyclable or pool->ring is full, so only the dma
>>> sync operation for inflight pages needs to be skipped.
>>> Do that by clearing pool->dma_sync and performing the
>>> sync under the rcu read lock when a page is recycled
>>> into pool->ring, ensuring that no dma sync operation
>>> is issued after page_pool_destroy() has returned.
>>
>> Something feels off here - either this is a micro-optimisation which I wouldn't really expect to be meaningful, or it means patch #2 doesn't actually do what it claims. If it really is possible to attempt to dma_sync a page *after* page_pool_inflight_unmap() has already reclaimed and unmapped it, that represents yet another DMA API lifecycle issue, which as well as being even more obviously incorrect usage-wise, could also still lead to the same crash (if the device is non-coherent).
> 
> A page owned by a page_pool mostly goes through the steps below:
> 1. When its pool is empty, page_pool calls the buddy allocator API to
>    allocate a page and calls the DMA mapping and sync_for_device APIs
>    for it. Otherwise it reuses a page already in the pool.
> 
> 2. The driver calls the page_pool API to allocate the page, and passes
>    the page to the network stack after a packet is DMA'ed into the page
>    and the sync_for_cpu API is called.
> 
> 3. When the network stack is done with the page, it calls the page_pool
>    API to free the page.
> 
> 4. If the page is not recyclable, page_pool does the dma unmapping and
>    releases the page back to the buddy allocator. Otherwise it does the
>    sync_for_device and puts the page back into its pool; the page may go
>    through step 1 again when the driver calls the page_pool allocate API.
> 
> The calling of the dma mapping and dma sync APIs is controlled by
> pool->dma_map and pool->dma_sync respectively; the previous patch only
> clears pool->dma_map after doing the dma unmapping. This patch ensures
> that there is no dma sync for the recycle case of step 4 by clearing
> pool->dma_sync.

But *why* does it want to ensure that? Is there some possible race where 
one thread can attempt to sync and recycle a page while another thread 
is attempting to unmap and free it, such that you can't guarantee the 
correctness of dma_sync calls after page_pool_inflight_unmap() has 
started, and skipping them is a workaround for that? If so, then frankly 
I think that would want solving properly, but at the very least this 
change would need to come before patch #2.

If not, and this is just some attempt at performance micro-optimisation, 
then I'd be keen to see the numbers to justify it, since I struggle to 
imagine it being worth the bother while already in the process of 
spending whole seconds scanning memory...

Thanks,
Robin.

> The dma sync skipping is also guaranteed to happen before
> page_pool_inflight_unmap() is called, because all callers will see the
> clearing of pool->dma_sync after synchronize_rcu(), and
> page_pool_inflight_unmap() is called after that same synchronize_rcu()
> in page_pool_destroy().
> 
>>
>> Otherwise, I don't imagine it's really worth worrying about optimising out syncs for any pages which happen to get naturally returned after page_pool_destroy() starts but before they're explicitly reclaimed. Realistically, the kinds of big server systems where reclaim takes an appreciable amount of time are going to be coherent and skipping syncs anyway.
> 
> The skipping is about skipping the dma sync for those inflight pages.
> I should make it clearer in the commit log that the skipping happens
> before page_pool_inflight_unmap() is called rather than before
> page_pool_destroy() returns.
> 
>>
>

Patch

diff --git a/net/core/page_pool.c b/net/core/page_pool.c
index 33a314abbba4..0bde7c6c781a 100644
--- a/net/core/page_pool.c
+++ b/net/core/page_pool.c
@@ -712,7 +712,8 @@  static void page_pool_return_page(struct page_pool *pool, netmem_ref netmem)
 	rcu_read_unlock();
 }
 
-static bool page_pool_recycle_in_ring(struct page_pool *pool, netmem_ref netmem)
+static bool page_pool_recycle_in_ring(struct page_pool *pool, netmem_ref netmem,
+				      unsigned int dma_sync_size)
 {
 	int ret;
 	/* BH protection not needed if current is softirq */
@@ -723,10 +724,13 @@  static bool page_pool_recycle_in_ring(struct page_pool *pool, netmem_ref netmem)
 
 	if (!ret) {
 		recycle_stat_inc(pool, ring);
-		return true;
+
+		rcu_read_lock();
+		page_pool_dma_sync_for_device(pool, netmem, dma_sync_size);
+		rcu_read_unlock();
 	}
 
-	return false;
+	return !ret;
 }
 
 /* Only allow direct recycling in special circumstances, into the
@@ -779,10 +783,11 @@  __page_pool_put_page(struct page_pool *pool, netmem_ref netmem,
 	if (likely(__page_pool_page_can_be_recycled(netmem))) {
 		/* Read barrier done in page_ref_count / READ_ONCE */
 
-		page_pool_dma_sync_for_device(pool, netmem, dma_sync_size);
-
-		if (allow_direct && page_pool_recycle_in_cache(netmem, pool))
+		if (allow_direct && page_pool_recycle_in_cache(netmem, pool)) {
+			page_pool_dma_sync_for_device(pool, netmem,
+						      dma_sync_size);
 			return 0;
+		}
 
 		/* Page found as candidate for recycling */
 		return netmem;
@@ -845,7 +850,7 @@  void page_pool_put_unrefed_netmem(struct page_pool *pool, netmem_ref netmem,
 
 	netmem =
 		__page_pool_put_page(pool, netmem, dma_sync_size, allow_direct);
-	if (netmem && !page_pool_recycle_in_ring(pool, netmem)) {
+	if (netmem && !page_pool_recycle_in_ring(pool, netmem, dma_sync_size)) {
 		/* Cache full, fallback to free pages */
 		recycle_stat_inc(pool, ring_full);
 		page_pool_return_page(pool, netmem);
@@ -903,14 +908,18 @@  void page_pool_put_page_bulk(struct page_pool *pool, void **data,
 
 	/* Bulk producer into ptr_ring page_pool cache */
 	in_softirq = page_pool_producer_lock(pool);
+	rcu_read_lock();
 	for (i = 0; i < bulk_len; i++) {
 		if (__ptr_ring_produce(&pool->ring, data[i])) {
 			/* ring full */
 			recycle_stat_inc(pool, ring_full);
 			break;
 		}
+		page_pool_dma_sync_for_device(pool, (__force netmem_ref)data[i],
+					      -1);
 	}
 	recycle_stat_add(pool, ring, i);
+	rcu_read_unlock();
 	page_pool_producer_unlock(pool, in_softirq);
 
 	/* Hopefully all pages was return into ptr_ring */
@@ -1200,6 +1209,8 @@  void page_pool_destroy(struct page_pool *pool)
 	if (!page_pool_release(pool))
 		return;
 
+	pool->dma_sync = false;
+
 	/* Paired with rcu lock in page_pool_napi_local() to enable clearing
 	 * of pool->p.napi in page_pool_disable_direct_recycling() is seen
 	 * before returning to driver to free the napi instance.