Message ID | 20241203173733.3181246-8-aleksander.lobakin@intel.com (mailing list archive)
---|---
State | Accepted
Commit | 9bd9f72a74344b54cfb6fcabf1173e6c6e5c6952
Delegated to: | Netdev Maintainers
Series | xdp: a fistful of generic changes pt. I
Alexander Lobakin <aleksander.lobakin@intel.com> writes:

> Add the following netmem counterparts:
>
> * virt_to_netmem() -- simple page_to_netmem(virt_to_page()) wrapper;
> * netmem_is_pfmemalloc() -- page_is_pfmemalloc() for page-backed
>   netmems, false otherwise;
>
> and the following "unsafe" versions:
>
> * __netmem_to_page()
> * __netmem_get_pp()
> * __netmem_address()
>
> They do the same as their non-underscored buddies, but assume the netmem
> is always page-backed. When working with header &page_pools, you don't
> need to check whether netmem belongs to the host memory and you can
> never get NULL instead of &page. Checks for the LSB, clearing the LSB,
> branches take cycles and increase object code size, sometimes
> significantly. When you're sure your PP is always host, you can avoid
> this by using the underscored counterparts.
>
> Signed-off-by: Alexander Lobakin <aleksander.lobakin@intel.com>

Makes sense to have these as helpers, spelling out the constraints.

Reviewed-by: Toke Høiland-Jørgensen <toke@redhat.com>
On Tue, Dec 3, 2024 at 9:43 AM Alexander Lobakin
<aleksander.lobakin@intel.com> wrote:
>
> Add the following netmem counterparts:
>
> * virt_to_netmem() -- simple page_to_netmem(virt_to_page()) wrapper;
> * netmem_is_pfmemalloc() -- page_is_pfmemalloc() for page-backed
>   netmems, false otherwise;
>
> and the following "unsafe" versions:
>
> * __netmem_to_page()
> * __netmem_get_pp()
> * __netmem_address()
>
> They do the same as their non-underscored buddies, but assume the netmem
> is always page-backed. When working with header &page_pools, you don't
> need to check whether netmem belongs to the host memory and you can
> never get NULL instead of &page. Checks for the LSB, clearing the LSB,
> branches take cycles and increase object code size, sometimes
> significantly. When you're sure your PP is always host, you can avoid
> this by using the underscored counterparts.
>

I can imagine future use cases where net_iov netmems are used for
headers. I'm thinking of a memory provider backed by hugepages
(Eric/Jakub's idea). In that case the netmem may be backed by a tail
page underneath or may be backed by a net_iov that happens to be
readable. But if these ideas ever materialize, we can always revisit
this.

Some suggestions for consideration below, but either way:

Reviewed-by: Mina Almasry <almasrymina@google.com>

> Signed-off-by: Alexander Lobakin <aleksander.lobakin@intel.com>
> ---
>  include/net/netmem.h | 78 ++++++++++++++++++++++++++++++++++++++++++--
>  1 file changed, 76 insertions(+), 2 deletions(-)
>
> diff --git a/include/net/netmem.h b/include/net/netmem.h
> index 8a6e20be4b9d..1b58faa4f20f 100644
> --- a/include/net/netmem.h
> +++ b/include/net/netmem.h
> @@ -72,6 +72,22 @@ static inline bool netmem_is_net_iov(const netmem_ref netmem)

[...]

> +static inline struct page *__netmem_to_page(netmem_ref netmem)
> +{
> +        return (__force struct page *)netmem;
> +}
> +

nit: I would name it netmem_to_page_unsafe(), just for glaringly
obvious clarity.

[...]

> +static inline bool netmem_is_pfmemalloc(netmem_ref netmem)
> +{
> +        if (netmem_is_net_iov(netmem))
> +                return false;
> +
> +        return page_is_pfmemalloc(netmem_to_page(netmem));
>  }

Is it better to add these pfmemalloc/address helpers, or better to do:

page = netmem_to_page_unsafe(netmem);
page_is_pfmemalloc(page);
page_address(page);

Sure the latter is a bit more of a pain to type, but is arguably more
elegant than having to add *_unsafe() versions of all the netmem
helpers eventually.

--
Thanks,
Mina
diff --git a/include/net/netmem.h b/include/net/netmem.h
index 8a6e20be4b9d..1b58faa4f20f 100644
--- a/include/net/netmem.h
+++ b/include/net/netmem.h
@@ -72,6 +72,22 @@ static inline bool netmem_is_net_iov(const netmem_ref netmem)
 	return (__force unsigned long)netmem & NET_IOV;
 }
 
+/**
+ * __netmem_to_page - unsafely get pointer to the &page backing @netmem
+ * @netmem: netmem reference to convert
+ *
+ * Unsafe version of netmem_to_page(). When @netmem is always page-backed,
+ * e.g. when it's a header buffer, performs faster and generates smaller
+ * object code (no check for the LSB, no WARN). When @netmem points to IOV,
+ * provokes undefined behaviour.
+ *
+ * Return: pointer to the &page (garbage if @netmem is not page-backed).
+ */
+static inline struct page *__netmem_to_page(netmem_ref netmem)
+{
+	return (__force struct page *)netmem;
+}
+
 /* This conversion fails (returns NULL) if the netmem_ref is not struct page
  * backed.
  */
@@ -80,7 +96,7 @@ static inline struct page *netmem_to_page(netmem_ref netmem)
 	if (WARN_ON_ONCE(netmem_is_net_iov(netmem)))
 		return NULL;
 
-	return (__force struct page *)netmem;
+	return __netmem_to_page(netmem);
 }
 
 static inline struct net_iov *netmem_to_net_iov(netmem_ref netmem)
@@ -103,6 +119,17 @@ static inline netmem_ref page_to_netmem(struct page *page)
 	return (__force netmem_ref)page;
 }
 
+/**
+ * virt_to_netmem - convert virtual memory pointer to a netmem reference
+ * @data: host memory pointer to convert
+ *
+ * Return: netmem reference to the &page backing this virtual address.
+ */
+static inline netmem_ref virt_to_netmem(const void *data)
+{
+	return page_to_netmem(virt_to_page(data));
+}
+
 static inline int netmem_ref_count(netmem_ref netmem)
 {
 	/* The non-pp refcount of net_iov is always 1. On net_iov, we only
@@ -127,6 +154,22 @@ static inline struct net_iov *__netmem_clear_lsb(netmem_ref netmem)
 	return (struct net_iov *)((__force unsigned long)netmem & ~NET_IOV);
 }
 
+/**
+ * __netmem_get_pp - unsafely get pointer to the &page_pool backing @netmem
+ * @netmem: netmem reference to get the pointer from
+ *
+ * Unsafe version of netmem_get_pp(). When @netmem is always page-backed,
+ * e.g. when it's a header buffer, performs faster and generates smaller
+ * object code (avoids clearing the LSB). When @netmem points to IOV,
+ * provokes invalid memory access.
+ *
+ * Return: pointer to the &page_pool (garbage if @netmem is not page-backed).
+ */
+static inline struct page_pool *__netmem_get_pp(netmem_ref netmem)
+{
+	return __netmem_to_page(netmem)->pp;
+}
+
 static inline struct page_pool *netmem_get_pp(netmem_ref netmem)
 {
 	return __netmem_clear_lsb(netmem)->pp;
@@ -158,12 +201,43 @@ static inline netmem_ref netmem_compound_head(netmem_ref netmem)
 	return page_to_netmem(compound_head(netmem_to_page(netmem)));
 }
 
+/**
+ * __netmem_address - unsafely get pointer to the memory backing @netmem
+ * @netmem: netmem reference to get the pointer for
+ *
+ * Unsafe version of netmem_address(). When @netmem is always page-backed,
+ * e.g. when it's a header buffer, performs faster and generates smaller
+ * object code (no check for the LSB). When @netmem points to IOV, provokes
+ * undefined behaviour.
+ *
+ * Return: pointer to the memory (garbage if @netmem is not page-backed).
+ */
+static inline void *__netmem_address(netmem_ref netmem)
+{
+	return page_address(__netmem_to_page(netmem));
+}
+
 static inline void *netmem_address(netmem_ref netmem)
 {
 	if (netmem_is_net_iov(netmem))
 		return NULL;
 
-	return page_address(netmem_to_page(netmem));
+	return __netmem_address(netmem);
+}
+
+/**
+ * netmem_is_pfmemalloc - check if @netmem was allocated under memory pressure
+ * @netmem: netmem reference to check
+ *
+ * Return: true if @netmem is page-backed and the page was allocated under
+ * memory pressure, false otherwise.
+ */
+static inline bool netmem_is_pfmemalloc(netmem_ref netmem)
+{
+	if (netmem_is_net_iov(netmem))
+		return false;
+
+	return page_is_pfmemalloc(netmem_to_page(netmem));
 }
 
 static inline unsigned long netmem_get_dma_addr(netmem_ref netmem)
Add the following netmem counterparts:

* virt_to_netmem() -- simple page_to_netmem(virt_to_page()) wrapper;
* netmem_is_pfmemalloc() -- page_is_pfmemalloc() for page-backed
  netmems, false otherwise;

and the following "unsafe" versions:

* __netmem_to_page()
* __netmem_get_pp()
* __netmem_address()

They do the same as their non-underscored buddies, but assume the netmem
is always page-backed. When working with header &page_pools, you don't
need to check whether netmem belongs to the host memory and you can
never get NULL instead of &page. Checks for the LSB, clearing the LSB,
branches take cycles and increase object code size, sometimes
significantly. When you're sure your PP is always host, you can avoid
this by using the underscored counterparts.

Signed-off-by: Alexander Lobakin <aleksander.lobakin@intel.com>
---
 include/net/netmem.h | 78 ++++++++++++++++++++++++++++++++++++++++++--
 1 file changed, 76 insertions(+), 2 deletions(-)