| Message ID | 20241203173733.3181246-2-aleksander.lobakin@intel.com |
|---|---|
| State | Accepted |
| Commit | ca5c94949facce1f67a4a9a9528a27f635ff3e78 |
| Delegated to | Netdev Maintainers |
| Series | xdp: a fistful of generic changes pt. I |
Alexander Lobakin <aleksander.lobakin@intel.com> writes:

> After the series "XSk buff on a diet" by Maciej, the greatest power of 2
> by which sizeof(struct xdp_buff_xsk) is divisible dropped from 16 to 8
> on x86_64. Also, sizeof(struct xdp_buff_xsk) is now 120 bytes, which,
> combined with the above, means each element of an array of buffs leaves
> 8 bytes at the end of a cacheline, so the elements end up scattered
> across cachelines chaotically.
> Use __aligned_largest for this struct. This alignment is usually
> 16 bytes, which makes the struct fill two full cachelines and aligns
> an array nicely. ___cacheline_aligned would be excessive here,
> especially on arches with 128-256-byte cachelines, as well as on 32-bit
> arches (76 -> 96 bytes on MIPS32R2), while doing no better than
> _largest.
>
> Signed-off-by: Alexander Lobakin <aleksander.lobakin@intel.com>

Ohh, didn't know about that attribute - neat!

Reviewed-by: Toke Høiland-Jørgensen <toke@redhat.com>
```diff
diff --git a/include/net/xsk_buff_pool.h b/include/net/xsk_buff_pool.h
index bb03cee716b3..7637799b6c19 100644
--- a/include/net/xsk_buff_pool.h
+++ b/include/net/xsk_buff_pool.h
@@ -29,7 +29,7 @@ struct xdp_buff_xsk {
 	dma_addr_t frame_dma;
 	struct xsk_buff_pool *pool;
 	struct list_head list_node;
-};
+} __aligned_largest;
 
 #define XSK_CHECK_PRIV_TYPE(t) BUILD_BUG_ON(sizeof(t) > offsetofend(struct xdp_buff_xsk, cb))
 #define XSK_TX_COMPL_FITS(t) BUILD_BUG_ON(sizeof(struct xsk_tx_metadata_compl) > sizeof(t))
```
After the series "XSk buff on a diet" by Maciej, the greatest power of 2 by which sizeof(struct xdp_buff_xsk) is divisible dropped from 16 to 8 on x86_64. Also, sizeof(struct xdp_buff_xsk) is now 120 bytes, which, combined with the above, means each element of an array of buffs leaves 8 bytes at the end of a cacheline, so the elements end up scattered across cachelines chaotically.

Use __aligned_largest for this struct. This alignment is usually 16 bytes, which makes the struct fill two full cachelines and aligns an array nicely. ___cacheline_aligned would be excessive here, especially on arches with 128-256-byte cachelines, as well as on 32-bit arches (76 -> 96 bytes on MIPS32R2), while doing no better than _largest.

Signed-off-by: Alexander Lobakin <aleksander.lobakin@intel.com>
---
 include/net/xsk_buff_pool.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)