
[net-next] tcp: Make GRO completion function inline

Message ID 20230611140756.1203607-1-parav@nvidia.com (mailing list archive)
State RFC
Delegated to: Netdev Maintainers
Series [net-next] tcp: Make GRO completion function inline

Checks

Context Check Description
netdev/series_format success Single patches do not need cover letters
netdev/tree_selection success Clearly marked for net-next, async
netdev/fixes_present success Fixes tag not required for -next series
netdev/header_inline success No static functions without inline keyword in header files
netdev/build_32bit success Errors and warnings before: 990 this patch: 990
netdev/cc_maintainers success CCed 6 of 6 maintainers
netdev/build_clang success Errors and warnings before: 127 this patch: 127
netdev/verify_signedoff success Signed-off-by tag matches author and committer
netdev/deprecated_api success None detected
netdev/check_selftest success No net selftest shell script
netdev/verify_fixes success No Fixes tag
netdev/build_allmodconfig_warn success Errors and warnings before: 997 this patch: 997
netdev/checkpatch success total: 0 errors, 0 warnings, 0 checks, 55 lines checked
netdev/kdoc success Errors and warnings before: 0 this patch: 0
netdev/source_inline success Was 0 now: 0

Commit Message

Parav Pandit June 11, 2023, 2:07 p.m. UTC
At 100G link speed with 1500-byte MTU, at 8.2 Mpps, if the device does
GRO to a 64K message size, this currently results in ~190k calls per
second to tcp_gro_complete() in the data path.

Inline this small routine to avoid that large number of function calls.
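
(As a sanity check on that figure, assuming full 64K aggregation of
1500-byte segments: 65536 / 1500 ~= 43.7 segments per aggregated skb,
so 8.2 Mpps / 43.7 ~= 188k tcp_gro_complete() calls per second.)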

Suggested-by: David Ahern <dsahern@kernel.org>
Signed-off-by: Parav Pandit <parav@nvidia.com>

---
This patch is untested as I do not have any of the 3 hw devices
calling this routine.

qede, bnxt and bnx2x maintainers,

Can you please verify with your devices whether it reduces CPU
utilization marginally, stays the same, or has any side effects?

---
 include/net/tcp.h      | 19 ++++++++++++++++++-
 net/ipv4/tcp_offload.c | 18 ------------------
 2 files changed, 18 insertions(+), 19 deletions(-)

Comments

Michael Chan July 1, 2023, 11:09 p.m. UTC | #1
On Sun, Jun 11, 2023 at 7:08 AM Parav Pandit <parav@nvidia.com> wrote:
>
> At 100G link speed with 1500-byte MTU, at 8.2 Mpps, if the device does
> GRO to a 64K message size, this currently results in ~190k calls per
> second to tcp_gro_complete() in the data path.
>
> Inline this small routine to avoid that large number of function calls.
>
> Suggested-by: David Ahern <dsahern@kernel.org>
> Signed-off-by: Parav Pandit <parav@nvidia.com>
>
> ---
> This patch is untested as I do not have any of the 3 hw devices
> calling this routine.
>
> qede, bnxt and bnx2x maintainers,
>
> Can you please verify with your devices whether it reduces CPU
> utilization marginally, stays the same, or has any side effects?
>
> ---

Sorry for the delay.  It works fine on bnxt NICs running hardware GRO.
No noticeable changes in throughput or CPU utilization running simple
netperf.  Thanks.

Tested-by: Michael Chan <michael.chan@broadcom.com>
Reviewed-by: Michael Chan <michael.chan@broadcom.com>
Alexander Lobakin July 3, 2023, 4:51 p.m. UTC | #2
From: Michael Chan <michael.chan@broadcom.com>
Date: Sat, 1 Jul 2023 16:09:53 -0700

> On Sun, Jun 11, 2023 at 7:08 AM Parav Pandit <parav@nvidia.com> wrote:
>>
>> At 100G link speed with 1500-byte MTU, at 8.2 Mpps, if the device does
>> GRO to a 64K message size, this currently results in ~190k calls per
>> second to tcp_gro_complete() in the data path.
>>
>> Inline this small routine to avoid that large number of function calls.
>>
>> Suggested-by: David Ahern <dsahern@kernel.org>
>> Signed-off-by: Parav Pandit <parav@nvidia.com>
>>
>> ---
>> This patch is untested as I do not have any of the 3 hw devices
>> calling this routine.
>>
>> qede, bnxt and bnx2x maintainers,
>>
>> Can you please verify with your devices whether it reduces CPU
>> utilization marginally, stays the same, or has any side effects?
>>
>> ---
> 
> Sorry for the delay.  It works fine on bnxt NICs running hardware GRO.
> No noticeable changes in throughput or CPU utilization running simple
> netperf.  Thanks.
> 
> Tested-by: Michael Chan <michael.chan@broadcom.com>
> Reviewed-by: Michael Chan <michael.chan@broadcom.com>
Why is this needed then if it gives nothing? :D

Thanks,
Olek
Michael Chan July 3, 2023, 5 p.m. UTC | #3
On Mon, Jul 3, 2023 at 9:52 AM Alexander Lobakin
<aleksander.lobakin@intel.com> wrote:
>
> From: Michael Chan <michael.chan@broadcom.com>
> Date: Sat, 1 Jul 2023 16:09:53 -0700
>
> > On Sun, Jun 11, 2023 at 7:08 AM Parav Pandit <parav@nvidia.com> wrote:
> >>
> >> At 100G link speed with 1500-byte MTU, at 8.2 Mpps, if the device does
> >> GRO to a 64K message size, this currently results in ~190k calls per
> >> second to tcp_gro_complete() in the data path.
> >>
> >> Inline this small routine to avoid that large number of function calls.
> >>
> >> Suggested-by: David Ahern <dsahern@kernel.org>
> >> Signed-off-by: Parav Pandit <parav@nvidia.com>
> >>
> >> ---
> >> This patch is untested as I do not have any of the 3 hw devices
> >> calling this routine.
> >>
> >> qede, bnxt and bnx2x maintainers,
> >>
> >> Can you please verify with your devices whether it reduces CPU
> >> utilization marginally, stays the same, or has any side effects?
> >>
> >> ---
> >
> > Sorry for the delay.  It works fine on bnxt NICs running hardware GRO.
> > No noticeable changes in throughput or CPU utilization running simple
> > netperf.  Thanks.
> >
> > Tested-by: Michael Chan <michael.chan@broadcom.com>
> > Reviewed-by: Michael Chan <michael.chan@broadcom.com>
> Why is this needed then if it gives nothing? :D
>

I only tested at 50G link speed with about 64K aggregation size.  At
higher speeds and smaller aggregation sizes, it may show slightly
better performance or lower CPU utilization.
David Ahern July 3, 2023, 5:29 p.m. UTC | #4
On 7/3/23 10:51 AM, Alexander Lobakin wrote:
> Why is this needed then if it gives nothing? :D

It is a question I asked. There are a fair number of trivial sk_buff
functions called in the datapath (e.g., skb_add_rx_frag is another).
Function calls are not free, so inlining them should *collectively*
provide measurable performance bumps as line rates and packet rates
increase.
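
As a minimal, hypothetical sketch of the kind of call site being
discussed (the name hw_gro_rx_complete and its surrounding shape are
invented for illustration; tcp_gro_complete() and napi_gro_receive()
are the real kernel APIs), a driver-side hardware GRO completion path
looks roughly like:

    #include <linux/netdevice.h>
    #include <net/tcp.h>

    /* With hardware GRO this path runs once per aggregated skb, i.e.
     * ~190k times per second at the rates quoted in the commit message,
     * so an inline tcp_gro_complete() saves a call/return on every
     * aggregated skb.
     */
    static void hw_gro_rx_complete(struct napi_struct *napi,
                                   struct sk_buff *skb)
    {
            tcp_gro_complete(skb);       /* finalize GRO metadata, now inline */
            napi_gro_receive(napi, skb); /* hand the merged skb to the stack */
    }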

Patch

diff --git a/include/net/tcp.h b/include/net/tcp.h
index 49611af31bb7..e6e0a7125618 100644
--- a/include/net/tcp.h
+++ b/include/net/tcp.h
@@ -40,6 +40,7 @@ 
 #include <net/inet_ecn.h>
 #include <net/dst.h>
 #include <net/mptcp.h>
+#include <net/gro.h>
 
 #include <linux/seq_file.h>
 #include <linux/memcontrol.h>
@@ -2043,7 +2044,23 @@  INDIRECT_CALLABLE_DECLARE(int tcp4_gro_complete(struct sk_buff *skb, int thoff))
 INDIRECT_CALLABLE_DECLARE(struct sk_buff *tcp4_gro_receive(struct list_head *head, struct sk_buff *skb));
 INDIRECT_CALLABLE_DECLARE(int tcp6_gro_complete(struct sk_buff *skb, int thoff));
 INDIRECT_CALLABLE_DECLARE(struct sk_buff *tcp6_gro_receive(struct list_head *head, struct sk_buff *skb));
-void tcp_gro_complete(struct sk_buff *skb);
+
+static inline void tcp_gro_complete(struct sk_buff *skb)
+{
+	struct tcphdr *th = tcp_hdr(skb);
+
+	skb->csum_start = (unsigned char *)th - skb->head;
+	skb->csum_offset = offsetof(struct tcphdr, check);
+	skb->ip_summed = CHECKSUM_PARTIAL;
+
+	skb_shinfo(skb)->gso_segs = NAPI_GRO_CB(skb)->count;
+
+	if (th->cwr)
+		skb_shinfo(skb)->gso_type |= SKB_GSO_TCP_ECN;
+
+	if (skb->encapsulation)
+		skb->inner_transport_header = skb->transport_header;
+}
 
 void __tcp_v4_send_check(struct sk_buff *skb, __be32 saddr, __be32 daddr);
 
diff --git a/net/ipv4/tcp_offload.c b/net/ipv4/tcp_offload.c
index 8311c38267b5..5628d6007d43 100644
--- a/net/ipv4/tcp_offload.c
+++ b/net/ipv4/tcp_offload.c
@@ -296,24 +296,6 @@  struct sk_buff *tcp_gro_receive(struct list_head *head, struct sk_buff *skb)
 	return pp;
 }
 
-void tcp_gro_complete(struct sk_buff *skb)
-{
-	struct tcphdr *th = tcp_hdr(skb);
-
-	skb->csum_start = (unsigned char *)th - skb->head;
-	skb->csum_offset = offsetof(struct tcphdr, check);
-	skb->ip_summed = CHECKSUM_PARTIAL;
-
-	skb_shinfo(skb)->gso_segs = NAPI_GRO_CB(skb)->count;
-
-	if (th->cwr)
-		skb_shinfo(skb)->gso_type |= SKB_GSO_TCP_ECN;
-
-	if (skb->encapsulation)
-		skb->inner_transport_header = skb->transport_header;
-}
-EXPORT_SYMBOL(tcp_gro_complete);
-
 INDIRECT_CALLABLE_SCOPE
 struct sk_buff *tcp4_gro_receive(struct list_head *head, struct sk_buff *skb)
 {
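
For reference, an annotated restatement of the function moved above
(the comments are editorial, not part of the patch):

    /* tcp_gro_complete(), step by step:
     *
     * - csum_start/csum_offset + CHECKSUM_PARTIAL: leave the TCP
     *   checksum of the merged payload to be completed later, by
     *   checksum offload or by software, with the result written to
     *   th->check.
     * - gso_segs = NAPI_GRO_CB(skb)->count: record how many wire
     *   segments were merged, so the skb can be resegmented by GSO
     *   if it is forwarded later.
     * - th->cwr sets SKB_GSO_TCP_ECN: if the merged header carries
     *   the ECN CWR flag, mark the skb so resegmentation preserves
     *   ECN semantics.
     * - skb->encapsulation: for tunneled traffic, record the TCP
     *   header as the inner transport header.
     */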