Message ID | 161419300107.2718959.18302883670835746249.stgit@firesoul (mailing list archive) |
---|---|
State | Superseded |
Delegated to: | Netdev Maintainers |
Series | [RFC,net-next,1/3] net: page_pool: refactor dma_map into own function page_pool_dma_map |
Context | Check | Description |
---|---|---|
netdev/cover_letter | warning | Series does not have a cover letter |
netdev/fixes_present | success | Link |
netdev/patch_count | success | Link |
netdev/tree_selection | success | Clearly marked for net-next |
netdev/subject_prefix | success | Link |
netdev/cc_maintainers | fail | 4 maintainers not CCed: hawk@kernel.org davem@davemloft.net kuba@kernel.org ilias.apalodimas@linaro.org |
netdev/source_inline | success | Was 0 now: 0 |
netdev/verify_signedoff | success | Link |
netdev/module_param | success | Was 0 now: 0 |
netdev/build_32bit | success | Errors and warnings before: 1 this patch: 1 |
netdev/kdoc | success | Errors and warnings before: 0 this patch: 0 |
netdev/verify_fixes | success | Link |
netdev/checkpatch | fail | ERROR: "foo * bar" should be "foo *bar" |
netdev/build_allmodconfig_warn | success | Errors and warnings before: 1 this patch: 1 |
netdev/header_inline | success | Link |
netdev/stable | success | Stable not CCed |
On Wed, Feb 24, 2021 at 07:56:41PM +0100, Jesper Dangaard Brouer wrote:
> In preparation for next patch, move the dma mapping into its own
> function, as this will make it easier to follow the changes.
>
> Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
> ---
>  net/core/page_pool.c | 49 +++++++++++++++++++++++++++++--------------------
>  1 file changed, 29 insertions(+), 20 deletions(-)
>
> diff --git a/net/core/page_pool.c b/net/core/page_pool.c
> index ad8b0707af04..50d52aa6fbeb 100644
> --- a/net/core/page_pool.c
> +++ b/net/core/page_pool.c
> @@ -180,6 +180,31 @@ static void page_pool_dma_sync_for_device(struct page_pool *pool,
>  					 pool->p.dma_dir);
>  }
>
> +static struct page * page_pool_dma_map(struct page_pool *pool,
> +				       struct page *page)
> +{

Why return a struct page* ? boolean maybe?

> +	dma_addr_t dma;
> +
> +	/* Setup DMA mapping: use 'struct page' area for storing DMA-addr
> +	 * since dma_addr_t can be either 32 or 64 bits and does not always fit
> +	 * into page private data (i.e 32bit cpu with 64bit DMA caps)
> +	 * This mapping is kept for lifetime of page, until leaving pool.
> +	 */
> +	dma = dma_map_page_attrs(pool->p.dev, page, 0,
> +				 (PAGE_SIZE << pool->p.order),
> +				 pool->p.dma_dir, DMA_ATTR_SKIP_CPU_SYNC);
> +	if (dma_mapping_error(pool->p.dev, dma)) {
> +		put_page(page);

This is a bit confusing when reading it. The name of the function
should try to map the page and report a yes/no, instead of trying to
call put_page as well. Can't we explicitly ask the user to call
put_page() if the mapping failed? A clear example is on patch 2/3,
when on the first read I was convinced there was a memory leak.

> +		return NULL;
> +	}
> +	page->dma_addr = dma;
> +
> +	if (pool->p.flags & PP_FLAG_DMA_SYNC_DEV)
> +		page_pool_dma_sync_for_device(pool, page, pool->p.max_len);
> +
> +	return page;
> +}
> +
>  /* slow path */
>  noinline
>  static struct page *__page_pool_alloc_pages_slow(struct page_pool *pool,
> @@ -187,7 +212,6 @@ static struct page *__page_pool_alloc_pages_slow(struct page_pool *pool,
>  {
>  	struct page *page;
>  	gfp_t gfp = _gfp;
> -	dma_addr_t dma;
>
>  	/* We could always set __GFP_COMP, and avoid this branch, as
>  	 * prep_new_page() can handle order-0 with __GFP_COMP.
> @@ -211,27 +235,12 @@ static struct page *__page_pool_alloc_pages_slow(struct page_pool *pool,
>  	if (!page)
>  		return NULL;
>
> -	if (!(pool->p.flags & PP_FLAG_DMA_MAP))
> -		goto skip_dma_map;
> -
> -	/* Setup DMA mapping: use 'struct page' area for storing DMA-addr
> -	 * since dma_addr_t can be either 32 or 64 bits and does not always fit
> -	 * into page private data (i.e 32bit cpu with 64bit DMA caps)
> -	 * This mapping is kept for lifetime of page, until leaving pool.
> -	 */
> -	dma = dma_map_page_attrs(pool->p.dev, page, 0,
> -				 (PAGE_SIZE << pool->p.order),
> -				 pool->p.dma_dir, DMA_ATTR_SKIP_CPU_SYNC);
> -	if (dma_mapping_error(pool->p.dev, dma)) {
> -		put_page(page);
> -		return NULL;
> +	if (pool->p.flags & PP_FLAG_DMA_MAP) {
> +		page = page_pool_dma_map(pool, page);
> +		if (!page)
> +			return NULL;
>  	}
> -	page->dma_addr = dma;
> -
> -	if (pool->p.flags & PP_FLAG_DMA_SYNC_DEV)
> -		page_pool_dma_sync_for_device(pool, page, pool->p.max_len);
>
> -skip_dma_map:
>  	/* Track how many pages are held 'in-flight' */
>  	pool->pages_state_hold_cnt++;
>

Thanks!
/Ilias
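Editor's note: the alternative Ilias is asking for would look roughly like the sketch below, where the helper only maps the page and reports success, while the caller keeps ownership of the page and does put_page() itself on failure. The bool-returning signature and the caller-side hunk are illustrative assumptions, not part of the posted patch.

```c
/* Sketch only (not the posted patch): map the page and report success;
 * the caller allocated the page, so the caller also frees it on failure.
 */
static bool page_pool_dma_map(struct page_pool *pool, struct page *page)
{
	dma_addr_t dma;

	dma = dma_map_page_attrs(pool->p.dev, page, 0,
				 (PAGE_SIZE << pool->p.order),
				 pool->p.dma_dir, DMA_ATTR_SKIP_CPU_SYNC);
	if (dma_mapping_error(pool->p.dev, dma))
		return false;

	page->dma_addr = dma;

	if (pool->p.flags & PP_FLAG_DMA_SYNC_DEV)
		page_pool_dma_sync_for_device(pool, page, pool->p.max_len);

	return true;
}
```

The caller in __page_pool_alloc_pages_slow() would then read something like:

```c
	if ((pool->p.flags & PP_FLAG_DMA_MAP) &&
	    unlikely(!page_pool_dma_map(pool, page))) {
		put_page(page);	/* ownership stays with the allocator */
		return NULL;
	}
```

That split keeps allocation and mapping responsibilities symmetric (the helper never frees a page it did not allocate), which is what avoids the apparent leak Ilias describes for patch 2/3.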
diff --git a/net/core/page_pool.c b/net/core/page_pool.c
index ad8b0707af04..50d52aa6fbeb 100644
--- a/net/core/page_pool.c
+++ b/net/core/page_pool.c
@@ -180,6 +180,31 @@ static void page_pool_dma_sync_for_device(struct page_pool *pool,
 					 pool->p.dma_dir);
 }
 
+static struct page * page_pool_dma_map(struct page_pool *pool,
+				       struct page *page)
+{
+	dma_addr_t dma;
+
+	/* Setup DMA mapping: use 'struct page' area for storing DMA-addr
+	 * since dma_addr_t can be either 32 or 64 bits and does not always fit
+	 * into page private data (i.e 32bit cpu with 64bit DMA caps)
+	 * This mapping is kept for lifetime of page, until leaving pool.
+	 */
+	dma = dma_map_page_attrs(pool->p.dev, page, 0,
+				 (PAGE_SIZE << pool->p.order),
+				 pool->p.dma_dir, DMA_ATTR_SKIP_CPU_SYNC);
+	if (dma_mapping_error(pool->p.dev, dma)) {
+		put_page(page);
+		return NULL;
+	}
+	page->dma_addr = dma;
+
+	if (pool->p.flags & PP_FLAG_DMA_SYNC_DEV)
+		page_pool_dma_sync_for_device(pool, page, pool->p.max_len);
+
+	return page;
+}
+
 /* slow path */
 noinline
 static struct page *__page_pool_alloc_pages_slow(struct page_pool *pool,
@@ -187,7 +212,6 @@ static struct page *__page_pool_alloc_pages_slow(struct page_pool *pool,
 {
 	struct page *page;
 	gfp_t gfp = _gfp;
-	dma_addr_t dma;
 
 	/* We could always set __GFP_COMP, and avoid this branch, as
 	 * prep_new_page() can handle order-0 with __GFP_COMP.
@@ -211,27 +235,12 @@ static struct page *__page_pool_alloc_pages_slow(struct page_pool *pool,
 	if (!page)
 		return NULL;
 
-	if (!(pool->p.flags & PP_FLAG_DMA_MAP))
-		goto skip_dma_map;
-
-	/* Setup DMA mapping: use 'struct page' area for storing DMA-addr
-	 * since dma_addr_t can be either 32 or 64 bits and does not always fit
-	 * into page private data (i.e 32bit cpu with 64bit DMA caps)
-	 * This mapping is kept for lifetime of page, until leaving pool.
-	 */
-	dma = dma_map_page_attrs(pool->p.dev, page, 0,
-				 (PAGE_SIZE << pool->p.order),
-				 pool->p.dma_dir, DMA_ATTR_SKIP_CPU_SYNC);
-	if (dma_mapping_error(pool->p.dev, dma)) {
-		put_page(page);
-		return NULL;
+	if (pool->p.flags & PP_FLAG_DMA_MAP) {
+		page = page_pool_dma_map(pool, page);
+		if (!page)
+			return NULL;
 	}
-	page->dma_addr = dma;
-
-	if (pool->p.flags & PP_FLAG_DMA_SYNC_DEV)
-		page_pool_dma_sync_for_device(pool, page, pool->p.max_len);
 
-skip_dma_map:
 	/* Track how many pages are held 'in-flight' */
 	pool->pages_state_hold_cnt++;
In preparation for next patch, move the dma mapping into its own
function, as this will make it easier to follow the changes.

Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
---
 net/core/page_pool.c | 49 +++++++++++++++++++++++++++++--------------------
 1 file changed, 29 insertions(+), 20 deletions(-)
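Editor's note: the branch being refactored only runs for pools created with PP_FLAG_DMA_MAP, so the DMA mapping (and the DMA_SYNC_DEV handling) is set up once per page for its whole lifetime in the pool. A rough driver-side sketch that would exercise this path is below; the helper name and the field values are illustrative assumptions, only the struct page_pool_params fields and flags come from the in-tree API of this period.

```c
#include <net/page_pool.h>

/* Illustrative pool setup: with PP_FLAG_DMA_MAP the page_pool core maps
 * each page when it enters the pool, and with PP_FLAG_DMA_SYNC_DEV it
 * syncs up to max_len bytes for the device before handing pages out.
 */
static struct page_pool *rx_page_pool_create(struct device *dev)
{
	struct page_pool_params pp_params = {
		.flags		= PP_FLAG_DMA_MAP | PP_FLAG_DMA_SYNC_DEV,
		.order		= 0,
		.pool_size	= 256,		/* illustrative RX ring size */
		.nid		= NUMA_NO_NODE,
		.dev		= dev,
		.dma_dir	= DMA_FROM_DEVICE,
		.max_len	= PAGE_SIZE,	/* sync the whole page */
		.offset		= 0,
	};

	return page_pool_create(&pp_params);	/* ERR_PTR() on failure */
}
```

With such a pool, every page returned by __page_pool_alloc_pages_slow() goes through the new page_pool_dma_map() helper introduced by this patch.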