
[net-next,0/2] mlxsw: Reduce memory footprint of mlxsw driver

Message ID cover.1719321422.git.petrm@nvidia.com

Message

Petr Machata June 25, 2024, 1:47 p.m. UTC
Amit Cohen writes:

A previous patch set converted the driver to use page pool for buffer
allocation. To simplify that change, we first used one contiguous
buffer per packet, allocated with order > 0. This set improves the
page pool usage to allocate exactly the number of pages required for
each packet.

This change requires using fragmented SKBs; until now, the whole
buffer was placed in the linear part. Note that 'skb->truesize' is
decreased for small packets.
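
As a rough illustration of the receive path this implies, here is a
hedged sketch (the function and variable names are made up for this
example and are not the driver's actual code; headroom and
skb_shared_info overhead in the first page are glossed over):

/* Sketch: the linear part holds at most one page worth of data; any
 * remaining bytes are attached as page fragments, so a small packet
 * consumes (and accounts in skb->truesize) only a single page.
 */
static struct sk_buff *rx_build_skb(struct page **pages,
				    u16 num_sg_entries,
				    unsigned int byte_count)
{
	unsigned int linear_len = min_t(unsigned int, byte_count, PAGE_SIZE);
	struct sk_buff *skb;
	int i;

	skb = napi_build_skb(page_address(pages[0]), PAGE_SIZE);
	if (!skb)
		return NULL;
	skb_mark_for_recycle(skb);	/* pages go back to the page pool */
	skb_put(skb, linear_len);
	byte_count -= linear_len;

	for (i = 1; i < num_sg_entries && byte_count; i++) {
		unsigned int frag_len = min_t(unsigned int, byte_count,
					      PAGE_SIZE);

		skb_add_rx_frag(skb, skb_shinfo(skb)->nr_frags, pages[i],
				0, frag_len, PAGE_SIZE);
		byte_count -= frag_len;
	}
	return skb;
}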

This set significantly reduces the memory consumption of the mlxsw
driver; the footprint is reduced by 26%.

Patch set overview:
Patch #1 calculates the number of scatter/gather entries required for
the maximum packet size and stores the value (a sketch of the
calculation follows this overview).
Patch #2 converts the driver to use fragmented buffers.
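
To make the calculation in patch #1 concrete, a hedged sketch follows;
the names and the overhead constant are assumptions for illustration,
not necessarily what the driver defines:

/* Assumed software overhead in the first page: headroom plus the
 * skb_shared_info placed at the end of the buffer.
 */
#define RX_SW_OVERHEAD \
	(NET_SKB_PAD + SKB_DATA_ALIGN(sizeof(struct skb_shared_info)))

static u16 num_sg_entries_get(u16 max_packet_size)
{
	/* One scatter/gather entry per page needed to cover the
	 * maximum packet size plus the software overhead.
	 */
	return DIV_ROUND_UP(max_packet_size + RX_SW_OVERHEAD, PAGE_SIZE);
}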

Amit Cohen (2):
  mlxsw: pci: Store number of scatter/gather entries for maximum packet
    size
  mlxsw: pci: Use fragmented buffers

 drivers/net/ethernet/mellanox/mlxsw/pci.c | 183 ++++++++++++++++++----
 1 file changed, 149 insertions(+), 34 deletions(-)

Comments

patchwork-bot+netdevbpf@kernel.org June 26, 2024, 3 p.m. UTC | #1
Hello:

This series was applied to netdev/net-next.git (main)
by Jakub Kicinski <kuba@kernel.org>:

On Tue, 25 Jun 2024 15:47:33 +0200 you wrote:
> Amit Cohen writes:
> 
> A previous patch set converted the driver to use page pool for buffer
> allocation. To simplify that change, we first used one contiguous
> buffer per packet, allocated with order > 0. This set improves the
> page pool usage to allocate exactly the number of pages required for
> each packet.
> 
> [...]

Here is the summary with links:
  - [net-next,1/2] mlxsw: pci: Store number of scatter/gather entries for maximum packet size
    https://git.kernel.org/netdev/net-next/c/8f8cea8f3ddb
  - [net-next,2/2] mlxsw: pci: Use fragmented buffers
    https://git.kernel.org/netdev/net-next/c/36437f469d7e

You are awesome, thank you!