
[net-next] net: mana: Increase the DEF_RX_BUFFERS_PER_QUEUE to 1024

Message ID 1726376184-14874-1-git-send-email-shradhagupta@linux.microsoft.com (mailing list archive)
State Superseded

Commit Message

Shradha Gupta Sept. 15, 2024, 4:56 a.m. UTC
Experiments show that increasing the default RX buffer count per queue
from 512 to 1024 gives slightly better throughput and significantly
reduces no_wqe_rx errors (receive completions that arrive while no
receive WQE is posted) on the receiver side. Other metrics, such as
CPU usage and retransmitted segments, also improve with the 1024 value.

Below are some snippets from the experiments:

ntttcp tests with 512 Rx buffers
---------------------------------------
connections|  throughput|  no_wqe errs|
---------------------------------------
1          |  40.93Gbps | 123,211     |
16         | 180.15Gbps | 190,120     |
128        | 180.20Gbps | 173,508     |
256        | 180.27Gbps | 189,884     |

ntttcp tests with 1024 Rx buffers
---------------------------------------
connections|  throughput|  no_wqe errs|
---------------------------------------
1          |  44.22Gbps | 19,864      |
16         | 180.19Gbps | 4,430       |
128        | 180.21Gbps | 2,560       |
256        | 180.29Gbps | 1,529       |

Hence, increase the default RX buffers per queue count
(DEF_RX_BUFFERS_PER_QUEUE) to 1024.

Signed-off-by: Shradha Gupta <shradhagupta@linux.microsoft.com>
Reviewed-by: Haiyang Zhang <haiyangz@microsoft.com>
---
 include/net/mana/mana.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
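
For context on how these per-queue bounds are consumed: drivers
typically report the maximum and currently configured ring sizes
through the ethtool get_ringparam callback, which is what
"ethtool -g <dev>" reads. Below is a minimal sketch under assumed
names (sketch_priv and sketch_get_ringparam are hypothetical, not the
mana driver's actual symbols):

#include <linux/ethtool.h>
#include <linux/netdevice.h>
#include <net/mana/mana.h>

/* Hypothetical private state; the real driver tracks the active ring
 * size in its own port context structure.
 */
struct sketch_priv {
	u32 rx_ring_size;	/* starts at DEF_RX_BUFFERS_PER_QUEUE */
};

/* Reported via "ethtool -g <dev>": the hard upper bound comes from
 * MAX_RX_BUFFERS_PER_QUEUE; MIN_RX_BUFFERS_PER_QUEUE is enforced on
 * the set path instead, since struct ethtool_ringparam has no
 * minimum field.
 */
static void sketch_get_ringparam(struct net_device *ndev,
				 struct ethtool_ringparam *ring,
				 struct kernel_ethtool_ringparam *kring,
				 struct netlink_ext_ack *extack)
{
	struct sketch_priv *priv = netdev_priv(ndev);

	ring->rx_max_pending = MAX_RX_BUFFERS_PER_QUEUE;
	ring->rx_pending = priv->rx_ring_size;
}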

Comments

Simon Horman Sept. 15, 2024, 6:08 p.m. UTC | #1
On Sat, Sep 14, 2024 at 09:56:24PM -0700, Shradha Gupta wrote:
> [...]

Hi Shradha,

net-next is currently closed other than for bug fixes.
Please consider reposting once it re-opens, after v6.12-rc1
has been released.
Shradha Gupta Sept. 20, 2024, 8:39 a.m. UTC | #2
On Sun, Sep 15, 2024 at 07:08:35PM +0100, Simon Horman wrote:
> On Sat, Sep 14, 2024 at 09:56:24PM -0700, Shradha Gupta wrote:
> > Through some experiments, we found out that increasing the default
> > RX buffers count from 512 to 1024, gives slightly better throughput
> > and significantly reduces the no_wqe_rx errs on the receiver side.
> > Along with these, other parameters like cpu usage, retrans seg etc
> > also show some improvement with 1024 value.
> > 
> > Following are some snippets from the experiments
> > 
> > ntttcp tests with 512 Rx buffers
> > ---------------------------------------
> > connections|  throughput|  no_wqe errs|
> > ---------------------------------------
> > 1          |  40.93Gbps | 123,211     |
> > 16         | 180.15Gbps | 190,120
> > 128        | 180.20Gbps | 173,508     |
> > 256        | 180.27Gbps | 189,884     |
> > 
> > ntttcp tests with 1024 Rx buffers
> > ---------------------------------------
> > connections|  throughput|  no_wqe errs|
> > ---------------------------------------
> > 1          |  44.22Gbps | 19,864      |
> > 16         | 180.19Gbps | 4,430       |
> > 128        | 180.21Gbps | 2,560       |
> > 256        | 180.29Gbps | 1,529       |
> > 
> > So, increasing the default RX buffers per queue count to 1024
> > 
> > Signed-off-by: Shradha Gupta <shradhagupta@linux.microsoft.com>
> > Reviewed-by: Haiyang Zhang <haiyangz@microsoft.com>
> 
> Hi Shradha,
> 
> net-next is currently closed other than for bug fixes.
> Please consider reposting once it re-opens, after v6.12-rc1
> has been released.
Noted, thanks Simon
> 
> -- 
> pw-bot: defer

Patch

diff --git a/include/net/mana/mana.h b/include/net/mana/mana.h
index f2a5200d8a0f..9b0faa24b758 100644
--- a/include/net/mana/mana.h
+++ b/include/net/mana/mana.h
@@ -43,7 +43,7 @@  enum TRI_STATE {
  * size beyond this value gets rejected by __alloc_page() call.
  */
 #define MAX_RX_BUFFERS_PER_QUEUE 8192
-#define DEF_RX_BUFFERS_PER_QUEUE 512
+#define DEF_RX_BUFFERS_PER_QUEUE 1024
 #define MIN_RX_BUFFERS_PER_QUEUE 128
 
 /* This max value for TX buffers is derived as the maximum allocatable
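
A note on the set path: when a user requests a new ring size with
"ethtool -G <dev> rx <n>", the driver is expected to reject values
outside [MIN_RX_BUFFERS_PER_QUEUE, MAX_RX_BUFFERS_PER_QUEUE]. A
hedged sketch of such validation follows (sketch_set_ringparam is a
hypothetical name; the real driver may apply further constraints,
e.g. rounding, before re-creating the queues):

#include <linux/ethtool.h>
#include <linux/netdevice.h>
#include <linux/netlink.h>
#include <net/mana/mana.h>

static int sketch_set_ringparam(struct net_device *ndev,
				struct ethtool_ringparam *ring,
				struct kernel_ethtool_ringparam *kring,
				struct netlink_ext_ack *extack)
{
	/* Reject requests outside the bounds defined in mana.h. */
	if (ring->rx_pending < MIN_RX_BUFFERS_PER_QUEUE ||
	    ring->rx_pending > MAX_RX_BUFFERS_PER_QUEUE) {
		NL_SET_ERR_MSG_MOD(extack, "RX ring size out of range");
		return -EINVAL;
	}

	/* ... tear down and re-create the RX queues at the new size,
	 * so the change takes effect without a driver reload ...
	 */
	return 0;
}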