
[net-next] net: mana: Increase the DEF_RX_BUFFERS_PER_QUEUE to 1024

Message ID 1726376184-14874-1-git-send-email-shradhagupta@linux.microsoft.com (mailing list archive)
State Deferred
Delegated to: Netdev Maintainers
Series [net-next] net: mana: Increase the DEF_RX_BUFFERS_PER_QUEUE to 1024

Checks

Context Check Description
netdev/series_format success Single patches do not need cover letters
netdev/tree_selection success Clearly marked for net-next
netdev/ynl success Generated files up to date; no warnings/errors; no diff in generated;
netdev/fixes_present success Fixes tag not required for -next series
netdev/header_inline success No static functions without inline keyword in header files
netdev/build_32bit success Errors and warnings before: 16 this patch: 16
netdev/build_tools success Errors and warnings before: 0 this patch: 0
netdev/cc_maintainers warning 1 maintainers not CCed: sharmaajay@microsoft.com
netdev/build_clang success Errors and warnings before: 16 this patch: 16
netdev/verify_signedoff success Signed-off-by tag matches author and committer
netdev/deprecated_api success None detected
netdev/check_selftest success No net selftest shell script
netdev/verify_fixes success No Fixes tag
netdev/build_allmodconfig_warn success Errors and warnings before: 16 this patch: 16
netdev/checkpatch success total: 0 errors, 0 warnings, 0 checks, 8 lines checked
netdev/build_clang_rust success No Rust files in patch. Skipping build
netdev/kdoc success Errors and warnings before: 0 this patch: 0
netdev/source_inline success Was 0 now: 0
netdev/contest fail net-next-2024-09-15--09-00 (tests: 764)

Commit Message

Shradha Gupta Sept. 15, 2024, 4:56 a.m. UTC
Experiments show that increasing the default RX buffer count per queue
from 512 to 1024 gives slightly better throughput and significantly
reduces the no_wqe_rx errors on the receiver side. Other metrics, such
as CPU usage and retransmitted segments, also improve with the 1024
value.

The following are snippets from the experiments:

ntttcp tests with 512 Rx buffers
---------------------------------------
connections|  throughput|  no_wqe errs|
---------------------------------------
1          |  40.93Gbps | 123,211     |
16         | 180.15Gbps | 190,120     |
128        | 180.20Gbps | 173,508     |
256        | 180.27Gbps | 189,884     |

ntttcp tests with 1024 Rx buffers
---------------------------------------
connections|  throughput|  no_wqe errs|
---------------------------------------
1          |  44.22Gbps | 19,864      |
16         | 180.19Gbps | 4,430       |
128        | 180.21Gbps | 2,560       |
256        | 180.29Gbps | 1,529       |

Therefore, increase the default RX buffers per queue count to 1024.

Signed-off-by: Shradha Gupta <shradhagupta@linux.microsoft.com>
Reviewed-by: Haiyang Zhang <haiyangz@microsoft.com>
---
 include/net/mana/mana.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
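
Note: the change above only raises the default. Assuming the driver exposes
ring parameters through ethtool (which the MIN/MAX limits in mana.h suggest),
the per-queue RX buffer count can be inspected and overridden at runtime.
In the sketch below, "eth0" is a placeholder device name, and the exact
per-queue counter names reported by -S depend on the driver version:

  ethtool -g eth0          # show current and maximum RX/TX ring sizes
  ethtool -G eth0 rx 1024  # request 1024 RX buffers per queue
  ethtool -S eth0          # per-queue statistics (e.g. the no_wqe counters)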

Comments

Simon Horman Sept. 15, 2024, 6:08 p.m. UTC | #1
On Sat, Sep 14, 2024 at 09:56:24PM -0700, Shradha Gupta wrote:
> Experiments show that increasing the default RX buffer count per queue
> from 512 to 1024 gives slightly better throughput and significantly
> reduces the no_wqe_rx errors on the receiver side. Other metrics, such
> as CPU usage and retransmitted segments, also improve with the 1024
> value.
> 
> The following are snippets from the experiments:
> 
> ntttcp tests with 512 Rx buffers
> ---------------------------------------
> connections|  throughput|  no_wqe errs|
> ---------------------------------------
> 1          |  40.93Gbps | 123,211     |
> 16         | 180.15Gbps | 190,120     |
> 128        | 180.20Gbps | 173,508     |
> 256        | 180.27Gbps | 189,884     |
> 
> ntttcp tests with 1024 Rx buffers
> ---------------------------------------
> connections|  throughput|  no_wqe errs|
> ---------------------------------------
> 1          |  44.22Gbps | 19,864      |
> 16         | 180.19Gbps | 4,430       |
> 128        | 180.21Gbps | 2,560       |
> 256        | 180.29Gbps | 1,529       |
> 
> Therefore, increase the default RX buffers per queue count to 1024.
> 
> Signed-off-by: Shradha Gupta <shradhagupta@linux.microsoft.com>
> Reviewed-by: Haiyang Zhang <haiyangz@microsoft.com>

Hi Shradha,

net-next is currently closed other than for bug fixes.
Please consider reposting once it re-opens, after v6.12-rc1
has been released.
Shradha Gupta Sept. 20, 2024, 8:39 a.m. UTC | #2
On Sun, Sep 15, 2024 at 07:08:35PM +0100, Simon Horman wrote:
> On Sat, Sep 14, 2024 at 09:56:24PM -0700, Shradha Gupta wrote:
> > Experiments show that increasing the default RX buffer count per queue
> > from 512 to 1024 gives slightly better throughput and significantly
> > reduces the no_wqe_rx errors on the receiver side. Other metrics, such
> > as CPU usage and retransmitted segments, also improve with the 1024
> > value.
> > 
> > The following are snippets from the experiments:
> > 
> > ntttcp tests with 512 Rx buffers
> > ---------------------------------------
> > connections|  throughput|  no_wqe errs|
> > ---------------------------------------
> > 1          |  40.93Gbps | 123,211     |
> > 16         | 180.15Gbps | 190,120     |
> > 128        | 180.20Gbps | 173,508     |
> > 256        | 180.27Gbps | 189,884     |
> > 
> > ntttcp tests with 1024 Rx buffers
> > ---------------------------------------
> > connections|  throughput|  no_wqe errs|
> > ---------------------------------------
> > 1          |  44.22Gbps | 19,864      |
> > 16         | 180.19Gbps | 4,430       |
> > 128        | 180.21Gbps | 2,560       |
> > 256        | 180.29Gbps | 1,529       |
> > 
> > Therefore, increase the default RX buffers per queue count to 1024.
> > 
> > Signed-off-by: Shradha Gupta <shradhagupta@linux.microsoft.com>
> > Reviewed-by: Haiyang Zhang <haiyangz@microsoft.com>
> 
> Hi Shradha,
> 
> net-next is currently closed other than for bug fixes.
> Please consider reposting once it re-opens, after v6.12-rc1
> has been released.
Noted, thanks Simon
> 
> -- 
> pw-bot: defer

Patch

diff --git a/include/net/mana/mana.h b/include/net/mana/mana.h
index f2a5200d8a0f..9b0faa24b758 100644
--- a/include/net/mana/mana.h
+++ b/include/net/mana/mana.h
@@ -43,7 +43,7 @@  enum TRI_STATE {
  * size beyond this value gets rejected by __alloc_page() call.
  */
 #define MAX_RX_BUFFERS_PER_QUEUE 8192
-#define DEF_RX_BUFFERS_PER_QUEUE 512
+#define DEF_RX_BUFFERS_PER_QUEUE 1024
 #define MIN_RX_BUFFERS_PER_QUEUE 128
 
 /* This max value for TX buffers is derived as the maximum allocatable
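
For context, here is a minimal, illustrative sketch (not the mana driver's
actual code) of how a requested per-queue RX buffer count could be validated
against the limits above, for example when servicing an ethtool -G request.
mana_adjust_rx_buffers() is a hypothetical helper name; only the three macros
come from mana.h.

/*
 * Illustrative sketch only -- not the mana driver's actual code.
 * Clamps a requested per-queue RX buffer count to the limits defined
 * in include/net/mana/mana.h; a request of 0 falls back to the default.
 */
#include <stdio.h>

#define MAX_RX_BUFFERS_PER_QUEUE 8192
#define DEF_RX_BUFFERS_PER_QUEUE 1024
#define MIN_RX_BUFFERS_PER_QUEUE 128

static unsigned int mana_adjust_rx_buffers(unsigned int requested)
{
	if (requested == 0)                       /* nothing requested: use the default */
		return DEF_RX_BUFFERS_PER_QUEUE;
	if (requested < MIN_RX_BUFFERS_PER_QUEUE) /* too small: raise to the minimum */
		return MIN_RX_BUFFERS_PER_QUEUE;
	if (requested > MAX_RX_BUFFERS_PER_QUEUE) /* too large: cap at the maximum */
		return MAX_RX_BUFFERS_PER_QUEUE;
	return requested;
}

int main(void)
{
	printf("default: %u\n", mana_adjust_rx_buffers(0));     /* prints 1024 */
	printf("clamped: %u\n", mana_adjust_rx_buffers(16384)); /* prints 8192 */
	return 0;
}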