
[net-next,07/11] net: page_pool: add DMA-sync-for-CPU inline helpers

Message ID 20230516161841.37138-8-aleksander.lobakin@intel.com (mailing list archive)
State Superseded
Delegated to: Netdev Maintainers
Series net: intel: start The Great Code Dedup + Page Pool for iavf

Checks

Context Check Description
netdev/series_format success Posting correctly formatted
netdev/tree_selection success Clearly marked for net-next, async
netdev/fixes_present success Fixes tag not required for -next series
netdev/header_inline success No static functions without inline keyword in header files
netdev/build_32bit success Errors and warnings before: 5176 this patch: 5176
netdev/cc_maintainers success CCed 7 of 7 maintainers
netdev/build_clang success Errors and warnings before: 1979 this patch: 1979
netdev/verify_signedoff success Signed-off-by tag matches author and committer
netdev/deprecated_api success None detected
netdev/check_selftest success No net selftest shell script
netdev/verify_fixes success No Fixes tag
netdev/build_allmodconfig_warn success Errors and warnings before: 5407 this patch: 5407
netdev/checkpatch success total: 0 errors, 0 warnings, 0 checks, 83 lines checked
netdev/kdoc fail Errors and warnings before: 0 this patch: 1
netdev/source_inline success Was 0 now: 0

Commit Message

Alexander Lobakin May 16, 2023, 4:18 p.m. UTC
Each driver is responsible for syncing buffers written by HW for CPU
before accessing them. Almost each PP-enabled driver uses the same
pattern, which could be shorthanded into a static inline to make driver
code a little bit more compact.
Introduce a couple such functions. The first one takes the actual size
of the data written by HW and is the main one to be used on Rx. The
second does the same, but only if the PP performs DMA synchronizations
at all. The last one picks max_len from the PP params and is designed
for more extreme cases when the size is unknown, but the buffer still
needs to be synced.
Also constify pointer arguments of page_pool_get_dma_dir() and
page_pool_get_dma_addr() to give a bit more room for optimization,
as both of them are read-only.

Signed-off-by: Alexander Lobakin <aleksander.lobakin@intel.com>
---
 include/net/page_pool.h | 59 ++++++++++++++++++++++++++++++++++++++---
 1 file changed, 55 insertions(+), 4 deletions(-)
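
For illustration, a minimal sketch of how a driver's Rx completion path could
use the new helpers. The drv_* ring/descriptor accessors below are
hypothetical; only the page_pool_dma_maybe_sync_for_cpu() call and the
pool->p.offset (headroom) behaviour come from this patch.

#include <net/page_pool.h>

/* Hypothetical Rx clean-up loop; all drv_* names are made up for the sketch */
static void drv_clean_rx(struct drv_rx_ring *ring)
{
	struct page_pool *pool = ring->pool;

	while (drv_rx_desc_ready(ring)) {
		struct page *page = drv_rx_next_page(ring);
		u32 len = drv_rx_desc_len(ring);	/* bytes written by HW */

		/* Syncs only the written bytes, starting at pool->p.offset
		 * (headroom); does nothing unless the pool was created with
		 * PP_FLAG_DMA_SYNC_DEV.
		 */
		page_pool_dma_maybe_sync_for_cpu(pool, page, len);

		drv_build_and_pass_skb(ring, page, len);
	}
}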

Comments

Jakub Kicinski May 18, 2023, 4:12 a.m. UTC | #1
On Tue, 16 May 2023 18:18:37 +0200 Alexander Lobakin wrote:
> Each driver is responsible for syncing buffers written by HW for CPU
> before accessing them. Almost each PP-enabled driver uses the same
> pattern, which could be shorthanded into a static inline to make driver
> code a little bit more compact.
> Introduce a couple such functions. The first one takes the actual size
> of the data written by HW and is the main one to be used on Rx. The
> second does the same, but only if the PP performs DMA synchronizations
> at all. The last one picks max_len from the PP params and is designed
> for more extreme cases when the size is unknown, but the buffer still
> needs to be synced.
> Also constify pointer arguments of page_pool_get_dma_dir() and
> page_pool_get_dma_addr() to give a bit more room for optimization,
> as both of them are read-only.

Very neat.

> diff --git a/include/net/page_pool.h b/include/net/page_pool.h
> index 8435013de06e..f740c50b661f 100644
> --- a/include/net/page_pool.h
> +++ b/include/net/page_pool.h
> @@ -32,7 +32,7 @@
>  
>  #include <linux/mm.h> /* Needed by ptr_ring */
>  #include <linux/ptr_ring.h>
> -#include <linux/dma-direction.h>
> +#include <linux/dma-mapping.h>

highly nit picky - but isn't dma-mapping.h pretty heavy?
And we include page_pool.h in skbuff.h. Not that it matters
today, but maybe one day we'll succeed putting skbuff.h
on a diet -- so perhaps it's better to put "inline helpers
with non-trivial dependencies" into a new header?

>  #define PP_FLAG_DMA_MAP		BIT(0) /* Should page_pool do the DMA
>  					* map/unmap

> +/**
> + * page_pool_dma_sync_for_cpu - sync Rx page for CPU after it's written by HW
> + * @pool: page_pool which this page belongs to
> + * @page: page to sync
> + * @dma_sync_size: size of the data written to the page
> + *
> + * Can be used as a shorthand to sync Rx pages before accessing them in the
> + * driver. Caller must ensure the pool was created with %PP_FLAG_DMA_MAP.
> + */
> +static inline void page_pool_dma_sync_for_cpu(const struct page_pool *pool,
> +					      const struct page *page,
> +					      u32 dma_sync_size)
> +{
> +	dma_sync_single_range_for_cpu(pool->p.dev,
> +				      page_pool_get_dma_addr(page),
> +				      pool->p.offset, dma_sync_size,
> +				      page_pool_get_dma_dir(pool));

Likely a dumb question but why does this exist?
Is there a case where the "maybe" version is not safe?

> +}
> +
> +/**
> + * page_pool_dma_maybe_sync_for_cpu - sync Rx page for CPU if needed
> + * @pool: page_pool which this page belongs to
> + * @page: page to sync
> + * @dma_sync_size: size of the data written to the page
> + *
> + * Performs DMA sync for CPU, but only when required (swiotlb, IOMMU etc.).
> + */
> +static inline void
> +page_pool_dma_maybe_sync_for_cpu(const struct page_pool *pool,
> +				 const struct page *page, u32 dma_sync_size)
> +{
> +	if (pool->p.flags & PP_FLAG_DMA_SYNC_DEV)
> +		page_pool_dma_sync_for_cpu(pool, page, dma_sync_size);
> +}
Jakub Kicinski May 18, 2023, 4:19 a.m. UTC | #2
On Tue, 16 May 2023 18:18:37 +0200 Alexander Lobakin wrote:
> +/**
> + * page_pool_dma_sync_for_cpu - sync full Rx page for CPU
...
> +page_pool_dma_sync_full_for_cpu(const struct page_pool *pool,

kdoc name mismatch
Ilias Apalodimas May 18, 2023, 7:03 a.m. UTC | #3
Hi all,

> On Wed, May 17, 2023 at 09:12:11PM -0700, Jakub Kicinski wrote:
> On Tue, 16 May 2023 18:18:37 +0200 Alexander Lobakin wrote:
> > Each driver is responsible for syncing buffers written by HW for CPU
> > before accessing them. Almost each PP-enabled driver uses the same
> > pattern, which could be shorthanded into a static inline to make driver
> > code a little bit more compact.
> > Introduce a couple such functions. The first one takes the actual size
> > of the data written by HW and is the main one to be used on Rx. The
> > second does the same, but only if the PP performs DMA synchronizations
> > at all. The last one picks max_len from the PP params and is designed
> > for more extreme cases when the size is unknown, but the buffer still
> > needs to be synced.
> > Also constify pointer arguments of page_pool_get_dma_dir() and
> > page_pool_get_dma_addr() to give a bit more room for optimization,
> > as both of them are read-only.
>
> Very neat.
>
> > diff --git a/include/net/page_pool.h b/include/net/page_pool.h
> > index 8435013de06e..f740c50b661f 100644
> > --- a/include/net/page_pool.h
> > +++ b/include/net/page_pool.h
> > @@ -32,7 +32,7 @@
> >
> >  #include <linux/mm.h> /* Needed by ptr_ring */
> >  #include <linux/ptr_ring.h>
> > -#include <linux/dma-direction.h>
> > +#include <linux/dma-mapping.h>
>
> highly nit picky - but isn't dma-mapping.h pretty heavy?
> And we include page_pool.h in skbuff.h. Not that it matters
> today, but maybe one day we'll succeed putting skbuff.h
> on a diet -- so perhaps it's better to put "inline helpers
> with non-trivial dependencies" into a new header?
>
> >  #define PP_FLAG_DMA_MAP		BIT(0) /* Should page_pool do the DMA
> >  					* map/unmap
>
> > +/**
> > + * page_pool_dma_sync_for_cpu - sync Rx page for CPU after it's written by HW
> > + * @pool: page_pool which this page belongs to
> > + * @page: page to sync
> > + * @dma_sync_size: size of the data written to the page
> > + *
> > + * Can be used as a shorthand to sync Rx pages before accessing them in the
> > + * driver. Caller must ensure the pool was created with %PP_FLAG_DMA_MAP.
> > + */
> > +static inline void page_pool_dma_sync_for_cpu(const struct page_pool *pool,
> > +					      const struct page *page,
> > +					      u32 dma_sync_size)
> > +{
> > +	dma_sync_single_range_for_cpu(pool->p.dev,
> > +				      page_pool_get_dma_addr(page),
> > +				      pool->p.offset, dma_sync_size,
> > +				      page_pool_get_dma_dir(pool));
>
> Likely a dumb question but why does this exist?
> Is there a case where the "maybe" version is not safe?
>

I got similar concerns here.  Syncing for the cpu is currently a
responsibility of the driver.  The reason for having an automated DMA sync
is that we know when we allocate buffers for the NIC to consume so we can
safely sync them accordingly.  I am fine with having a page pool version for the
cpu sync, but do we really have to check the pp flags for that?  IOW if you
are at the point that you need to sync a buffer for the cpu *someone*
already mapped it for you.  Regardless of who mapped it, the sync is
identical.

> > +}
> > +
> > +/**
> > + * page_pool_dma_maybe_sync_for_cpu - sync Rx page for CPU if needed
> > + * @pool: page_pool which this page belongs to
> > + * @page: page to sync
> > + * @dma_sync_size: size of the data written to the page
> > + *
> > + * Performs DMA sync for CPU, but only when required (swiotlb, IOMMU etc.).
> > + */
> > +static inline void
> > +page_pool_dma_maybe_sync_for_cpu(const struct page_pool *pool,
> > +				 const struct page *page, u32 dma_sync_size)
> > +{
> > +	if (pool->p.flags & PP_FLAG_DMA_SYNC_DEV)
> > +		page_pool_dma_sync_for_cpu(pool, page, dma_sync_size);
> > +}

Thanks
/Ilias
Alexander Lobakin May 18, 2023, 1:45 p.m. UTC | #4
From: Jakub Kicinski <kuba@kernel.org>
Date: Wed, 17 May 2023 21:12:11 -0700

> On Tue, 16 May 2023 18:18:37 +0200 Alexander Lobakin wrote:
>> Each driver is responsible for syncing buffers written by HW for CPU
>> before accessing them. Almost each PP-enabled driver uses the same
>> pattern, which could be shorthanded into a static inline to make driver
>> code a little bit more compact.
>> Introduce a couple such functions. The first one takes the actual size
>> of the data written by HW and is the main one to be used on Rx. The
>> second does the same, but only if the PP performs DMA synchronizations
>> at all. The last one picks max_len from the PP params and is designed
>> for more extreme cases when the size is unknown, but the buffer still
>> needs to be synced.
>> Also constify pointer arguments of page_pool_get_dma_dir() and
>> page_pool_get_dma_addr() to give a bit more room for optimization,
>> as both of them are read-only.
> 
> Very neat.
> 
>> diff --git a/include/net/page_pool.h b/include/net/page_pool.h
>> index 8435013de06e..f740c50b661f 100644
>> --- a/include/net/page_pool.h
>> +++ b/include/net/page_pool.h
>> @@ -32,7 +32,7 @@
>>  
>>  #include <linux/mm.h> /* Needed by ptr_ring */
>>  #include <linux/ptr_ring.h>
>> -#include <linux/dma-direction.h>
>> +#include <linux/dma-mapping.h>
> 
> highly nit picky - but isn't dma-mapping.h pretty heavy?
> And we include page_pool.h in skbuff.h. Not that it matters
> today, but maybe one day we'll succeed putting skbuff.h
> on a diet -- so perhaps it's better to put "inline helpers
> with non-trivial dependencies" into a new header?

Maybe we could rather stop including page_pool.h into skbuff.h? It's
used there only for one extern, which could be declared directly in
skbuff.h. When Matteo was developing PP recycling, he was storing
mem_info in skb as well, but then it was optimized and we don't do that
anymore.
It annoys sometimes to see the whole kernel rebuilt each time I edit
page_pool.h :D In fact, only PP-enabled drivers and core code need it.
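
For context, the kind of change being suggested here is a plain
forward-declaration swap; a rough sketch (the prototype below is a
placeholder, since the exact symbol skbuff.h needs isn't named in this
thread):

/* include/linux/skbuff.h -- sketch only: drop #include <net/page_pool.h>
 * and keep just what skbuff.h actually uses. The prototype name here is
 * a placeholder, not the real declaration.
 */
struct page_pool;
bool skb_pp_recycle_page_placeholder(struct page *page);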

> 
>>  #define PP_FLAG_DMA_MAP		BIT(0) /* Should page_pool do the DMA
>>  					* map/unmap
> 
>> +/**
>> + * page_pool_dma_sync_for_cpu - sync Rx page for CPU after it's written by HW
>> + * @pool: page_pool which this page belongs to
>> + * @page: page to sync
>> + * @dma_sync_size: size of the data written to the page
>> + *
>> + * Can be used as a shorthand to sync Rx pages before accessing them in the
>> + * driver. Caller must ensure the pool was created with %PP_FLAG_DMA_MAP.
>> + */
>> +static inline void page_pool_dma_sync_for_cpu(const struct page_pool *pool,
>> +					      const struct page *page,
>> +					      u32 dma_sync_size)
>> +{
>> +	dma_sync_single_range_for_cpu(pool->p.dev,
>> +				      page_pool_get_dma_addr(page),
>> +				      pool->p.offset, dma_sync_size,
>> +				      page_pool_get_dma_dir(pool));
> 
> Likely a dumb question but why does this exist?
> Is there a case where the "maybe" version is not safe?

If the driver doesn't set DMA_SYNC_DEV flag, then the "maybe" version
will never do anything. But we may want to use these helpers in such
drivers too?
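
To make that concrete, a sketch (driver-side values are assumptions): a pool
created with PP_FLAG_DMA_MAP but without PP_FLAG_DMA_SYNC_DEV gets nothing
from the "maybe" helper, so such a driver would have to call the
unconditional one.

/* Sketch: pool that maps but never asks PP to sync (values are examples) */
struct page_pool_params pp_params = {
	.flags		= PP_FLAG_DMA_MAP,	/* no PP_FLAG_DMA_SYNC_DEV */
	.pool_size	= 256,
	.nid		= NUMA_NO_NODE,
	.dev		= dev,
	.dma_dir	= DMA_FROM_DEVICE,
	.offset		= XDP_PACKET_HEADROOM,
};

/* later, on Rx, for a page from this pool: */
page_pool_dma_maybe_sync_for_cpu(pool, page, len);	/* no-op: flag not set */
page_pool_dma_sync_for_cpu(pool, page, len);		/* actually syncs      */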

> 
>> +}
>> +
>> +/**
>> + * page_pool_dma_maybe_sync_for_cpu - sync Rx page for CPU if needed
>> + * @pool: page_pool which this page belongs to
>> + * @page: page to sync
>> + * @dma_sync_size: size of the data written to the page
>> + *
>> + * Performs DMA sync for CPU, but only when required (swiotlb, IOMMU etc.).
>> + */
>> +static inline void
>> +page_pool_dma_maybe_sync_for_cpu(const struct page_pool *pool,
>> +				 const struct page *page, u32 dma_sync_size)
>> +{
>> +	if (pool->p.flags & PP_FLAG_DMA_SYNC_DEV)
>> +		page_pool_dma_sync_for_cpu(pool, page, dma_sync_size);
>> +}
Thanks,
Olek
Alexander Lobakin May 18, 2023, 1:53 p.m. UTC | #5
From: Ilias Apalodimas <ilias.apalodimas@linaro.org>
Date: Thu, 18 May 2023 10:03:41 +0300

> Hi all,
> 
>> On Wed, May 17, 2023 at 09:12:11PM -0700, Jakub Kicinski wrote:
>> On Tue, 16 May 2023 18:18:37 +0200 Alexander Lobakin wrote:
>>> Each driver is responsible for syncing buffers written by HW for CPU
>>> before accessing them. Almost each PP-enabled driver uses the same
>>> pattern, which could be shorthanded into a static inline to make driver
>>> code a little bit more compact.

[...]

>>> +	dma_sync_single_range_for_cpu(pool->p.dev,
>>> +				      page_pool_get_dma_addr(page),
>>> +				      pool->p.offset, dma_sync_size,
>>> +				      page_pool_get_dma_dir(pool));
>>
>> Likely a dumb question but why does this exist?
>> Is there a case where the "maybe" version is not safe?
>>
> 
> I got similar concerns here.  Syncing for the cpu is currently a
> responsibility of the driver.  The reason for having an automated DMA sync
> is that we know when we allocate buffers for the NIC to consume so we can
> safely sync them accordingly.  I am fine with having a page pool version for the
> cpu sync, but do we really have to check the pp flags for that?  IOW if you
> are at the point that you need to sync a buffer for the cpu *someone*
> already mapped it for you.  Regardless of who mapped it, the sync is
> identical.

The flag check in the "maybe" version is a continuation of the shortcut from
6/11. If the flag is not set even though you asked PP to do syncs, that
means PP enabled the shortcut so that we don't go through the function call
ladders for nothing. The ladder is basically the same for sync-for-CPU as
the one described in 6/11 for sync-for-dev.
I could place that check in the driver, but I feel it's better to have it
one level up to reduce boilerplate.

> 
>>> +}
>>> +
>>> +/**
>>> + * page_pool_dma_maybe_sync_for_cpu - sync Rx page for CPU if needed
>>> + * @pool: page_pool which this page belongs to
>>> + * @page: page to sync
>>> + * @dma_sync_size: size of the data written to the page
>>> + *
>>> + * Performs DMA sync for CPU, but only when required (swiotlb, IOMMU etc.).
>>> + */
>>> +static inline void
>>> +page_pool_dma_maybe_sync_for_cpu(const struct page_pool *pool,
>>> +				 const struct page *page, u32 dma_sync_size)
>>> +{
>>> +	if (pool->p.flags & PP_FLAG_DMA_SYNC_DEV)
>>> +		page_pool_dma_sync_for_cpu(pool, page, dma_sync_size);
>>> +}
> 
> Thanks
> /Ilias

Thanks,
Olek
Jakub Kicinski May 18, 2023, 2:56 p.m. UTC | #6
On Thu, 18 May 2023 15:45:33 +0200 Alexander Lobakin wrote:
> >> diff --git a/include/net/page_pool.h b/include/net/page_pool.h
> >> index 8435013de06e..f740c50b661f 100644
> >> --- a/include/net/page_pool.h
> >> +++ b/include/net/page_pool.h
> >> @@ -32,7 +32,7 @@
> >>  
> >>  #include <linux/mm.h> /* Needed by ptr_ring */
> >>  #include <linux/ptr_ring.h>
> >> -#include <linux/dma-direction.h>
> >> +#include <linux/dma-mapping.h>  
> > 
> > highly nit picky - but isn't dma-mapping.h pretty heavy?
> > And we include page_pool.h in skbuff.h. Not that it matters
> > today, but maybe one day we'll succeed putting skbuff.h
> > on a diet -- so perhaps it's better to put "inline helpers
> > with non-trivial dependencies" into a new header?  
> 
> Maybe we could rather stop including page_pool.h into skbuff.h? It's
> used there only for one extern, which could be declared directly in
> skbuff.h. When Matteo was developing PP recycling, he was storing
> mem_info in skb as well, but then it was optimized and we don't do that
> anymore.
> It annoys sometimes to see the whole kernel rebuilt each time I edit
> page_pool.h :D In fact, only PP-enabled drivers and core code need it.

Or maybe we can do both? I think that separating types, defines and
simple wrappers from helpers should be considered good code hygiene.

> >>  #define PP_FLAG_DMA_MAP		BIT(0) /* Should page_pool do the DMA
> >>  					* map/unmap  
> >   
> >> +/**
> >> + * page_pool_dma_sync_for_cpu - sync Rx page for CPU after it's written by HW
> >> + * @pool: page_pool which this page belongs to
> >> + * @page: page to sync
> >> + * @dma_sync_size: size of the data written to the page
> >> + *
> >> + * Can be used as a shorthand to sync Rx pages before accessing them in the
> >> + * driver. Caller must ensure the pool was created with %PP_FLAG_DMA_MAP.
> >> + */
> >> +static inline void page_pool_dma_sync_for_cpu(const struct page_pool *pool,
> >> +					      const struct page *page,
> >> +					      u32 dma_sync_size)
> >> +{
> >> +	dma_sync_single_range_for_cpu(pool->p.dev,
> >> +				      page_pool_get_dma_addr(page),
> >> +				      pool->p.offset, dma_sync_size,
> >> +				      page_pool_get_dma_dir(pool));  
> > 
> > Likely a dumb question but why does this exist?
> > Is there a case where the "maybe" version is not safe?  
> 
> If the driver doesn't set DMA_SYNC_DEV flag, then the "maybe" version
> will never do anything. But we may want to use these helpers in such
> drivers too?

Oh, I see, the polarity of the flag is awkward. Hm.
Maybe just rename things, drop the "maybe_" and prefix the non-checking
version with __ ? We expect drivers to call the version which check the
flag mostly (AFAIU), so it should have the most obvious name.
Plus perhaps a sentence in the kdoc explaining why __ exists would be
good, if it wasn't obvious to me it may not be obvious to others..
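
Concretely, the renaming proposed above would make the pair look roughly
like this (sketch only; the final v2 naming isn't settled in this thread):

/* Always syncs; for drivers/pools that never set PP_FLAG_DMA_SYNC_DEV */
static inline void __page_pool_dma_sync_for_cpu(const struct page_pool *pool,
						const struct page *page,
						u32 dma_sync_size)
{
	dma_sync_single_range_for_cpu(pool->p.dev,
				      page_pool_get_dma_addr(page),
				      pool->p.offset, dma_sync_size,
				      page_pool_get_dma_dir(pool));
}

/* What most drivers would call: skips the sync when the pool already told
 * the core that no syncing is needed for this device.
 */
static inline void page_pool_dma_sync_for_cpu(const struct page_pool *pool,
					      const struct page *page,
					      u32 dma_sync_size)
{
	if (pool->p.flags & PP_FLAG_DMA_SYNC_DEV)
		__page_pool_dma_sync_for_cpu(pool, page, dma_sync_size);
}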
Alexander Lobakin May 18, 2023, 3:41 p.m. UTC | #7
From: Jakub Kicinski <kuba@kernel.org>
Date: Thu, 18 May 2023 07:56:43 -0700

> On Thu, 18 May 2023 15:45:33 +0200 Alexander Lobakin wrote:
>>>> diff --git a/include/net/page_pool.h b/include/net/page_pool.h
>>>> index 8435013de06e..f740c50b661f 100644
>>>> --- a/include/net/page_pool.h
>>>> +++ b/include/net/page_pool.h
>>>> @@ -32,7 +32,7 @@
>>>>  
>>>>  #include <linux/mm.h> /* Needed by ptr_ring */
>>>>  #include <linux/ptr_ring.h>
>>>> -#include <linux/dma-direction.h>
>>>> +#include <linux/dma-mapping.h>  
>>>
>>> highly nit picky - but isn't dma-mapping.h pretty heavy?
>>> And we include page_pool.h in skbuff.h. Not that it matters
>>> today, but maybe one day we'll succeed putting skbuff.h
>>> on a diet -- so perhaps it's better to put "inline helpers
>>> with non-trivial dependencies" into a new header?  
>>
>> Maybe we could rather stop including page_pool.h into skbuff.h? It's
>> used there only for one extern, which could be declared directly in
>> skbuff.h. When Matteo was developing PP recycling, he was storing
>> mem_info in skb as well, but then it was optimized and we don't do that
>> anymore.
>> It annoys sometimes to see the whole kernel rebuilt each time I edit
>> page_pool.h :D In fact, only PP-enabled drivers and core code need it.
> 
> Or maybe we can do both? I think that separating types, defines and
> simple wrappers from helpers should be considered good code hygiene.

I'll definitely take a look, I also like the idea of minimalistic and
lightweight headers.
page_pool.h and page_pool_drv.h? :D

> 
>>>>  #define PP_FLAG_DMA_MAP		BIT(0) /* Should page_pool do the DMA
>>>>  					* map/unmap  
>>>   
>>>> +/**
>>>> + * page_pool_dma_sync_for_cpu - sync Rx page for CPU after it's written by HW
>>>> + * @pool: page_pool which this page belongs to
>>>> + * @page: page to sync
>>>> + * @dma_sync_size: size of the data written to the page
>>>> + *
>>>> + * Can be used as a shorthand to sync Rx pages before accessing them in the
>>>> + * driver. Caller must ensure the pool was created with %PP_FLAG_DMA_MAP.
>>>> + */
>>>> +static inline void page_pool_dma_sync_for_cpu(const struct page_pool *pool,
>>>> +					      const struct page *page,
>>>> +					      u32 dma_sync_size)
>>>> +{
>>>> +	dma_sync_single_range_for_cpu(pool->p.dev,
>>>> +				      page_pool_get_dma_addr(page),
>>>> +				      pool->p.offset, dma_sync_size,
>>>> +				      page_pool_get_dma_dir(pool));  
>>>
>>> Likely a dumb question but why does this exist?
>>> Is there a case where the "maybe" version is not safe?  
>>
>> If the driver doesn't set DMA_SYNC_DEV flag, then the "maybe" version
>> will never do anything. But we may want to use these helpers in such
>> drivers too?
> 
> Oh, I see, the polarity of the flag is awkward. Hm.
> Maybe just rename things, drop the "maybe_" and prefix the non-checking
> version with __ ? We expect drivers to call the version which checks the
> flag mostly (AFAIU), so it should have the most obvious name.
> Plus perhaps a sentence in the kdoc explaining why __ exists would be
> good, if it wasn't obvious to me it may not be obvious to others..

Ah, +, good point.

Thanks,
Olek
Jakub Kicinski May 18, 2023, 8:36 p.m. UTC | #8
On Thu, 18 May 2023 17:41:52 +0200 Alexander Lobakin wrote:
> > Or maybe we can do both? I think that separating types, defines and
> > simple wrappers from helpers should be considered good code hygiene.  
> 
> I'll definitely take a look, I also like the idea of minimalistic and
> lightweight headers.
> page_pool.h and page_pool_drv.h? :D

What I've been doing lately is split like this:

include/net/something.h           (simply includes all other headers)
include/net/something/types.h     (structs, defines, enums)
include/net/something/functions.h (inlines and function declarations)

If that's reasonable -- we should put the helpers under

include/net/page_pool/functions.h ?
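
Mapped onto page_pool, that scheme would look roughly like this (a sketch
following the naming in the example above; the split that eventually lands
may pick different file names):

/* include/net/page_pool.h -- umbrella header most .c files keep including */
#include <net/page_pool/types.h>
#include <net/page_pool/functions.h>

/* include/net/page_pool/types.h: struct page_pool, page_pool_params and
 * the PP_FLAG_* defines -- light enough for other headers to include.
 *
 * include/net/page_pool/functions.h: the static inlines (including the
 * new page_pool_dma_sync_for_cpu() helpers) and function declarations;
 * this is where the heavier <linux/dma-mapping.h> include would live.
 */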
Alexander Lobakin May 19, 2023, 1:56 p.m. UTC | #9
From: Jakub Kicinski <kuba@kernel.org>
Date: Thu, 18 May 2023 13:36:27 -0700

> On Thu, 18 May 2023 17:41:52 +0200 Alexander Lobakin wrote:
>>> Or maybe we can do both? I think that separating types, defines and
>>> simple wrappers from helpers should be considered good code hygiene.  
>>
>> I'll definitely take a look, I also like the idea of minimalistic and
>> lightweight headers.
>> page_pool.h and page_pool_drv.h? :D
> 
> What I've been doing lately is split like this:
> 
> include/net/something.h           (simply includes all other headers)
> include/net/something/types.h     (structs, defines, enums)
> include/net/something/functions.h (inlines and function declarations)
> 
> If that's reasonable -- we should put the helpers under
> 
> include/net/page_pool/functions.h ?

Hmm, all files that need something from page_pool.h usually need both
types and functions. Not sure we'll benefit anything here. OTOH leaving
those sync-for-cpu inlines alone allows to avoid including dma-mapping.h
and currently only IAVF needs them. So my idea is:

- you need smth from PP, but not sync-for-cpu -- more lightweight
  page_pool.h is for you;
- you need sync-for-cpu (or maybe something else with heavy deps in the
  future) -- just include page_pool_drv.h.

I tried moving something else, but couldn't find anything that would
give any win. <linux/mm.h> and <linux/ptr_ring.h> are needed to define
`struct page_pool`, i.e. even being structured like in your example they
would've gone into pp/types.h =\
`struct ptr_ring` itself doesn't require any MM-related definitions, so
would we split it into ptr_ring/{types,functions}.h, we could probably
avoid a couple includes :D

Thanks,
Olek
Jakub Kicinski May 19, 2023, 8:45 p.m. UTC | #10
On Fri, 19 May 2023 15:56:40 +0200 Alexander Lobakin wrote:
> From: Jakub Kicinski <kuba@kernel.org>
> Date: Thu, 18 May 2023 13:36:27 -0700
> >> I'll definitely take a look, I also like the idea of minimalistic and
> >> lightweight headers.
> >> page_pool.h and page_pool_drv.h? :D  
> > 
> > What I've been doing lately is split like this:
> > 
> > include/net/something.h           (simply includes all other headers)
> > include/net/something/types.h     (structs, defines, enums)
> > include/net/something/functions.h (inlines and function declarations)
> > 
> > If that's reasonable -- we should put the helpers under
> > 
> > include/net/page_pool/functions.h ?  
> 
> Hmm, all files that need something from page_pool.h usually need both
> types and functions. Not sure we'll benefit anything here.

Ack, in the scheme above most places (source files) would include
something.h; something/types.h is just for other headers, and
something/functions.h is basically never included directly.

> OTOH leaving
> those sync-for-cpu inlines alone allows to avoid including dma-mapping.h
> and currently only IAVF needs them. So my idea is:
> 
> - you need smth from PP, but not sync-for-cpu -- more lightweight
>   page_pool.h is for you;
> - you need sync-for-cpu (or maybe something else with heavy deps in the
>   future) -- just include page_pool_drv.h.

The idea makes sense in isolation, but I'm trying to figure out
a convention which would not require case-by-case discussions.

> I tried moving something else, but couldn't find anything that would
> give any win. <linux/mm.h> and <linux/ptr_ring.h> are needed to define
> `struct page_pool`, i.e. even being structured like in your example they
> would've gone into pp/types.h =\
> `struct ptr_ring` itself doesn't require any MM-related definitions, so
> would we split it into ptr_ring/{types,functions}.h, we could probably
> avoid a couple includes :D

Ack, not saying that we need to split now, it's just about the naming
(everyone's favorite topic).

I think that it's a touch weird to name the header _drv.h and then
include it in the core in multiple places (*cough* xdp_sock_drv.h). 
Also, if someone needs to add another "heavy" static inline for use by
the core, they will try to put it in page_pool.h rather than _drv.h...

I'd rather split the includes by the basic language-level contents,
first, then by the intended consumer, only if necessary. Language 
level sorting requires less thinking :)

But none of this is important, if you don't want to do it, just keep 
the new helpers in page_pool.h (let's not do another _drv.h).
Alexander Lobakin May 22, 2023, 1:48 p.m. UTC | #11
From: Jakub Kicinski <kuba@kernel.org>
Date: Fri, 19 May 2023 13:45:45 -0700

> On Fri, 19 May 2023 15:56:40 +0200 Alexander Lobakin wrote:
>> From: Jakub Kicinski <kuba@kernel.org>
>> Date: Thu, 18 May 2023 13:36:27 -0700

[...]

> Ack, not saying that we need to split now, it's just about the naming
> (everyone's favorite topic).
> 
> I think that it's a touch weird to name the header _drv.h and then
> include it in the core in multiple places (*cough* xdp_sock_drv.h). 

Hahaha, I also thought of it :>

> Also, if someone needs to add another "heavy" static inline for use by
> the core, they will try to put it in page_pool.h rather than _drv.h...
> 
> I'd rather split the includes by the basic language-level contents,
> first, then by the intended consumer, only if necessary. Language 
> level sorting requires less thinking :)
> 
> But none of this is important, if you don't want to do it, just keep 
> the new helpers in page_pool.h (let's not do another _drv.h).

Ack, will just put them there. It doesn't get included by the whole kernel
via skbuff.h anymore (in v2), so new inlines won't hurt.

Thanks,
Olek
Magnus Karlsson May 22, 2023, 3:27 p.m. UTC | #12
On Mon, 22 May 2023 at 15:50, Alexander Lobakin
<aleksander.lobakin@intel.com> wrote:
>
> From: Jakub Kicinski <kuba@kernel.org>
> Date: Fri, 19 May 2023 13:45:45 -0700
>
> > On Fri, 19 May 2023 15:56:40 +0200 Alexander Lobakin wrote:
> >> From: Jakub Kicinski <kuba@kernel.org>
> >> Date: Thu, 18 May 2023 13:36:27 -0700
>
> [...]
>
> > Ack, not saying that we need to split now, it's just about the naming
> > (everyone's favorite topic).
> >
> > I think that it's a touch weird to name the header _drv.h and then
> > include it in the core in multiple places (*cough* xdp_sock_drv.h).
>
> Hahaha, I also thought of it :>

Haha! Point taken. Will clear it up.

> > Also, if someone needs to add another "heavy" static inline for use by
> > the core, they will try to put it in page_pool.h rather than _drv.h...
> >
> > I'd rather split the includes by the basic language-level contents,
> > first, then by the intended consumer, only if necessary. Language
> > level sorting requires less thinking :)
> >
> > But none of this is important, if you don't want to do it, just keep
> > the new helpers in page_pool.h (let's not do another _drv.h).
>
> Ack, will just put them there. It doesn't get included by the whole kernel
> via skbuff.h anymore (in v2), so new inlines won't hurt.
>
> Thanks,
> Olek
> _______________________________________________
> Intel-wired-lan mailing list
> Intel-wired-lan@osuosl.org
> https://lists.osuosl.org/mailman/listinfo/intel-wired-lan

Patch

diff --git a/include/net/page_pool.h b/include/net/page_pool.h
index 8435013de06e..f740c50b661f 100644
--- a/include/net/page_pool.h
+++ b/include/net/page_pool.h
@@ -32,7 +32,7 @@ 
 
 #include <linux/mm.h> /* Needed by ptr_ring */
 #include <linux/ptr_ring.h>
-#include <linux/dma-direction.h>
+#include <linux/dma-mapping.h>
 
 #define PP_FLAG_DMA_MAP		BIT(0) /* Should page_pool do the DMA
 					* map/unmap
@@ -237,8 +237,8 @@  static inline struct page *page_pool_dev_alloc_frag(struct page_pool *pool,
 /* get the stored dma direction. A driver might decide to treat this locally and
  * avoid the extra cache line from page_pool to determine the direction
  */
-static
-inline enum dma_data_direction page_pool_get_dma_dir(struct page_pool *pool)
+static inline enum dma_data_direction
+page_pool_get_dma_dir(const struct page_pool *pool)
 {
 	return pool->p.dma_dir;
 }
@@ -363,7 +363,7 @@  static inline void page_pool_recycle_direct(struct page_pool *pool,
 #define PAGE_POOL_DMA_USE_PP_FRAG_COUNT	\
 		(sizeof(dma_addr_t) > sizeof(unsigned long))
 
-static inline dma_addr_t page_pool_get_dma_addr(struct page *page)
+static inline dma_addr_t page_pool_get_dma_addr(const struct page *page)
 {
 	dma_addr_t ret = page->dma_addr;
 
@@ -380,6 +380,57 @@  static inline void page_pool_set_dma_addr(struct page *page, dma_addr_t addr)
 		page->dma_addr_upper = upper_32_bits(addr);
 }
 
+/**
+ * page_pool_dma_sync_for_cpu - sync Rx page for CPU after it's written by HW
+ * @pool: page_pool which this page belongs to
+ * @page: page to sync
+ * @dma_sync_size: size of the data written to the page
+ *
+ * Can be used as a shorthand to sync Rx pages before accessing them in the
+ * driver. Caller must ensure the pool was created with %PP_FLAG_DMA_MAP.
+ */
+static inline void page_pool_dma_sync_for_cpu(const struct page_pool *pool,
+					      const struct page *page,
+					      u32 dma_sync_size)
+{
+	dma_sync_single_range_for_cpu(pool->p.dev,
+				      page_pool_get_dma_addr(page),
+				      pool->p.offset, dma_sync_size,
+				      page_pool_get_dma_dir(pool));
+}
+
+/**
+ * page_pool_dma_maybe_sync_for_cpu - sync Rx page for CPU if needed
+ * @pool: page_pool which this page belongs to
+ * @page: page to sync
+ * @dma_sync_size: size of the data written to the page
+ *
+ * Performs DMA sync for CPU, but only when required (swiotlb, IOMMU etc.).
+ */
+static inline void
+page_pool_dma_maybe_sync_for_cpu(const struct page_pool *pool,
+				 const struct page *page, u32 dma_sync_size)
+{
+	if (pool->p.flags & PP_FLAG_DMA_SYNC_DEV)
+		page_pool_dma_sync_for_cpu(pool, page, dma_sync_size);
+}
+
+/**
+ * page_pool_dma_sync_for_cpu - sync full Rx page for CPU
+ * @pool: page_pool which this page belongs to
+ * @page: page to sync
+ *
+ * Performs sync for the entire length exposed to hardware. Can be used on
+ * DMA errors or before freeing the page, when it's unknown whether the HW
+ * touched the buffer.
+ */
+static inline void
+page_pool_dma_sync_full_for_cpu(const struct page_pool *pool,
+				const struct page *page)
+{
+	page_pool_dma_sync_for_cpu(pool, page, pool->p.max_len);
+}
+
 static inline bool is_page_pool_compiled_in(void)
 {
 #ifdef CONFIG_PAGE_POOL