
[4/5] net: page_pool: refactor dma_map into own function page_pool_dma_map

Message ID 20210301161200.18852-5-mgorman@techsingularity.net (mailing list archive)
State Superseded
Delegated to: Netdev Maintainers
Series Introduce a bulk order-0 page allocator with two in-tree users

Checks

Context Check Description
netdev/cover_letter success
netdev/fixes_present success
netdev/patch_count success
netdev/tree_selection success Guessed tree name to be net-next
netdev/subject_prefix success
netdev/cc_maintainers fail 4 maintainers not CCed: davem@davemloft.net ilias.apalodimas@linaro.org kuba@kernel.org hawk@kernel.org
netdev/source_inline success Was 0 now: 0
netdev/verify_signedoff success
netdev/module_param success Was 0 now: 0
netdev/build_32bit success Errors and warnings before: 1 this patch: 1
netdev/kdoc success Errors and warnings before: 0 this patch: 0
netdev/verify_fixes success
netdev/checkpatch success total: 0 errors, 0 warnings, 0 checks, 70 lines checked
netdev/build_allmodconfig_warn success Errors and warnings before: 1 this patch: 1
netdev/header_inline success
netdev/stable success Stable not CCed

Commit Message

Mel Gorman March 1, 2021, 4:11 p.m. UTC
From: Jesper Dangaard Brouer <brouer@redhat.com>

In preparation for the next patch, move the DMA mapping into its own
function, as this will make it easier to follow the changes.

V2: make page_pool_dma_map return boolean (Ilias)

Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
---
 net/core/page_pool.c | 45 +++++++++++++++++++++++++-------------------
 1 file changed, 26 insertions(+), 19 deletions(-)

Comments

Ilias Apalodimas March 2, 2021, 6:49 p.m. UTC | #1
Hi Mel,

Can you please CC me in future revisions? I almost missed that!

On Mon, Mar 01, 2021 at 04:11:59PM +0000, Mel Gorman wrote:
> From: Jesper Dangaard Brouer <brouer@redhat.com>
> 
> In preparation for the next patch, move the DMA mapping into its own
> function, as this will make it easier to follow the changes.
> 
> V2: make page_pool_dma_map return boolean (Ilias)
> 

[...]

>  static struct page *__page_pool_alloc_pages_slow(struct page_pool *pool,
>  						 gfp_t _gfp)
>  {
> +	unsigned int pp_flags = pool->p.flags;
>  	struct page *page;
>  	gfp_t gfp = _gfp;
> -	dma_addr_t dma;
>  
>  	/* We could always set __GFP_COMP, and avoid this branch, as
>  	 * prep_new_page() can handle order-0 with __GFP_COMP.
> @@ -211,30 +234,14 @@ static struct page *__page_pool_alloc_pages_slow(struct page_pool *pool,
>  	if (!page)
>  		return NULL;
>  
> -	if (!(pool->p.flags & PP_FLAG_DMA_MAP))
> -		goto skip_dma_map;
> -
> -	/* Setup DMA mapping: use 'struct page' area for storing DMA-addr
> -	 * since dma_addr_t can be either 32 or 64 bits and does not always fit
> -	 * into page private data (i.e 32bit cpu with 64bit DMA caps)
> -	 * This mapping is kept for lifetime of page, until leaving pool.
> -	 */
> -	dma = dma_map_page_attrs(pool->p.dev, page, 0,
> -				 (PAGE_SIZE << pool->p.order),
> -				 pool->p.dma_dir, DMA_ATTR_SKIP_CPU_SYNC);
> -	if (dma_mapping_error(pool->p.dev, dma)) {
> +	if (pp_flags & PP_FLAG_DMA_MAP &&

Nit pick but can we have if ((pp_flags & PP_FLAG_DMA_MAP) && ...

> +	    unlikely(!page_pool_dma_map(pool, page))) {
>  		put_page(page);
>  		return NULL;
>  	}
> -	page->dma_addr = dma;
>  
> -	if (pool->p.flags & PP_FLAG_DMA_SYNC_DEV)
> -		page_pool_dma_sync_for_device(pool, page, pool->p.max_len);
> -
> -skip_dma_map:
>  	/* Track how many pages are held 'in-flight' */
>  	pool->pages_state_hold_cnt++;
> -
>  	trace_page_pool_state_hold(pool, page, pool->pages_state_hold_cnt);
>  
>  	/* When page just alloc'ed is should/must have refcnt 1. */
> -- 
> 2.26.2
> 

Otherwise 
Reviewed-by: Ilias Apalodimas <ilias.apalodimas@linaro.org>
Mel Gorman March 3, 2021, 9:18 a.m. UTC | #2
On Tue, Mar 02, 2021 at 08:49:06PM +0200, Ilias Apalodimas wrote:
> Hi Mel,
> 
> Can you please CC me in future revisions? I almost missed that!
> 

Will do.

> On Mon, Mar 01, 2021 at 04:11:59PM +0000, Mel Gorman wrote:
> > From: Jesper Dangaard Brouer <brouer@redhat.com>
> > 
> > In preparation for the next patch, move the DMA mapping into its own
> > function, as this will make it easier to follow the changes.
> > 
> > V2: make page_pool_dma_map return boolean (Ilias)
> > 
> 
> [...]
> 
> > @@ -211,30 +234,14 @@ static struct page *__page_pool_alloc_pages_slow(struct page_pool *pool,
> >  	if (!page)
> >  		return NULL;
> >  
> > -	if (!(pool->p.flags & PP_FLAG_DMA_MAP))
> > -		goto skip_dma_map;
> > -
> > -	/* Setup DMA mapping: use 'struct page' area for storing DMA-addr
> > -	 * since dma_addr_t can be either 32 or 64 bits and does not always fit
> > -	 * into page private data (i.e 32bit cpu with 64bit DMA caps)
> > -	 * This mapping is kept for lifetime of page, until leaving pool.
> > -	 */
> > -	dma = dma_map_page_attrs(pool->p.dev, page, 0,
> > -				 (PAGE_SIZE << pool->p.order),
> > -				 pool->p.dma_dir, DMA_ATTR_SKIP_CPU_SYNC);
> > -	if (dma_mapping_error(pool->p.dev, dma)) {
> > +	if (pp_flags & PP_FLAG_DMA_MAP &&
> 
> Nit pick but can we have if ((pp_flags & PP_FLAG_DMA_MAP) && ...
> 

Done.

> [...]
>
> > 
> 
> Otherwise 
> Reviewed-by: Ilias Apalodimas <ilias.apalodimas@linaro.org> 

Thanks. I'll wait for other review feedback before sending a V2.
Jesper Dangaard Brouer March 3, 2021, 10:19 a.m. UTC | #3
On Wed, 3 Mar 2021 09:18:25 +0000
Mel Gorman <mgorman@techsingularity.net> wrote:
> On Tue, Mar 02, 2021 at 08:49:06PM +0200, Ilias Apalodimas wrote:
> > On Mon, Mar 01, 2021 at 04:11:59PM +0000, Mel Gorman wrote:  
> > > From: Jesper Dangaard Brouer <brouer@redhat.com>
> > > 
> > > In preparation for the next patch, move the DMA mapping into its own
> > > function, as this will make it easier to follow the changes.
> > > 
> > > V2: make page_pool_dma_map return boolean (Ilias)
> > >   
> > 
> > [...]
> >   
> > > @@ -211,30 +234,14 @@ static struct page *__page_pool_alloc_pages_slow(struct page_pool *pool,
> > >  	if (!page)
> > >  		return NULL;
> > >  
> > > -	if (!(pool->p.flags & PP_FLAG_DMA_MAP))
> > > -		goto skip_dma_map;
> > > -
> > > -	/* Setup DMA mapping: use 'struct page' area for storing DMA-addr
> > > -	 * since dma_addr_t can be either 32 or 64 bits and does not always fit
> > > -	 * into page private data (i.e 32bit cpu with 64bit DMA caps)
> > > -	 * This mapping is kept for lifetime of page, until leaving pool.
> > > -	 */
> > > -	dma = dma_map_page_attrs(pool->p.dev, page, 0,
> > > -				 (PAGE_SIZE << pool->p.order),
> > > -				 pool->p.dma_dir, DMA_ATTR_SKIP_CPU_SYNC);
> > > -	if (dma_mapping_error(pool->p.dev, dma)) {
> > > +	if (pp_flags & PP_FLAG_DMA_MAP &&  
> > 
> > Nit pick but can we have if ((pp_flags & PP_FLAG_DMA_MAP) && ...
> >   
> 
> Done.

Thanks for fixing this nitpick, and carrying the patch.
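For reference, with the parenthesization Ilias asked for applied, the slow-path
check would presumably read as follows in the V2 respin (a sketch of the
expected change, not a hunk taken from V2):

	/* Map the page if the pool manages DMA mapping; drop it on failure. */
	if ((pp_flags & PP_FLAG_DMA_MAP) &&
	    unlikely(!page_pool_dma_map(pool, page))) {
		put_page(page);
		return NULL;
	}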

Patch

diff --git a/net/core/page_pool.c b/net/core/page_pool.c
index ad8b0707af04..a26f2ceb6a87 100644
--- a/net/core/page_pool.c
+++ b/net/core/page_pool.c
@@ -180,14 +180,37 @@ static void page_pool_dma_sync_for_device(struct page_pool *pool,
 					 pool->p.dma_dir);
 }
 
+static bool page_pool_dma_map(struct page_pool *pool, struct page *page)
+{
+	dma_addr_t dma;
+
+	/* Setup DMA mapping: use 'struct page' area for storing DMA-addr
+	 * since dma_addr_t can be either 32 or 64 bits and does not always fit
+	 * into page private data (i.e 32bit cpu with 64bit DMA caps)
+	 * This mapping is kept for lifetime of page, until leaving pool.
+	 */
+	dma = dma_map_page_attrs(pool->p.dev, page, 0,
+				 (PAGE_SIZE << pool->p.order),
+				 pool->p.dma_dir, DMA_ATTR_SKIP_CPU_SYNC);
+	if (dma_mapping_error(pool->p.dev, dma))
+		return false;
+
+	page->dma_addr = dma;
+
+	if (pool->p.flags & PP_FLAG_DMA_SYNC_DEV)
+		page_pool_dma_sync_for_device(pool, page, pool->p.max_len);
+
+	return true;
+}
+
 /* slow path */
 noinline
 static struct page *__page_pool_alloc_pages_slow(struct page_pool *pool,
 						 gfp_t _gfp)
 {
+	unsigned int pp_flags = pool->p.flags;
 	struct page *page;
 	gfp_t gfp = _gfp;
-	dma_addr_t dma;
 
 	/* We could always set __GFP_COMP, and avoid this branch, as
 	 * prep_new_page() can handle order-0 with __GFP_COMP.
@@ -211,30 +234,14 @@ static struct page *__page_pool_alloc_pages_slow(struct page_pool *pool,
 	if (!page)
 		return NULL;
 
-	if (!(pool->p.flags & PP_FLAG_DMA_MAP))
-		goto skip_dma_map;
-
-	/* Setup DMA mapping: use 'struct page' area for storing DMA-addr
-	 * since dma_addr_t can be either 32 or 64 bits and does not always fit
-	 * into page private data (i.e 32bit cpu with 64bit DMA caps)
-	 * This mapping is kept for lifetime of page, until leaving pool.
-	 */
-	dma = dma_map_page_attrs(pool->p.dev, page, 0,
-				 (PAGE_SIZE << pool->p.order),
-				 pool->p.dma_dir, DMA_ATTR_SKIP_CPU_SYNC);
-	if (dma_mapping_error(pool->p.dev, dma)) {
+	if (pp_flags & PP_FLAG_DMA_MAP &&
+	    unlikely(!page_pool_dma_map(pool, page))) {
 		put_page(page);
 		return NULL;
 	}
-	page->dma_addr = dma;
 
-	if (pool->p.flags & PP_FLAG_DMA_SYNC_DEV)
-		page_pool_dma_sync_for_device(pool, page, pool->p.max_len);
-
-skip_dma_map:
 	/* Track how many pages are held 'in-flight' */
 	pool->pages_state_hold_cnt++;
-
 	trace_page_pool_state_hold(pool, page, pool->pages_state_hold_cnt);
 
 	/* When page just alloc'ed is should/must have refcnt 1. */
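For context beyond the hunks above: the new helper only runs for pools created
with PP_FLAG_DMA_MAP set. A minimal driver-side sketch of such a setup follows;
the function name, pool size, and RX-oriented parameter values are illustrative
assumptions, not taken from this series.

	#include <net/page_pool.h>

	/* Illustrative setup: with PP_FLAG_DMA_MAP set, every page coming out
	 * of __page_pool_alloc_pages_slow() is mapped via page_pool_dma_map();
	 * PP_FLAG_DMA_SYNC_DEV additionally triggers the sync-for-device call
	 * at the end of the new helper.
	 */
	static struct page_pool *rxq_page_pool_create(struct device *dev)
	{
		struct page_pool_params pp_params = {
			.flags		= PP_FLAG_DMA_MAP | PP_FLAG_DMA_SYNC_DEV,
			.order		= 0,			/* order-0 pages */
			.pool_size	= 256,			/* illustrative */
			.nid		= NUMA_NO_NODE,
			.dev		= dev,			/* device doing the DMA */
			.dma_dir	= DMA_FROM_DEVICE,	/* RX direction */
			.max_len	= PAGE_SIZE,		/* sync length */
			.offset		= 0,
		};

		/* page_pool_create() returns an ERR_PTR() on failure. */
		return page_pool_create(&pp_params);
	}

Pages are then drawn with page_pool_dev_alloc_pages(pool); its slow path is the
function refactored above, and a DMA mapping failure there now surfaces as a
NULL return after the page has been handed back via put_page().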