
[RFC,4/7] net: openvswitch: ovs_vport_receive reduce stack usage

Message ID 20230927001308.749910-5-npiggin@gmail.com (mailing list archive)
State RFC
Delegated to: Netdev Maintainers
Series net: openvswitch: Reduce stack usage

Checks

Context Check Description
netdev/tree_selection success Guessing tree name failed - patch did not apply

Commit Message

Nicholas Piggin Sept. 27, 2023, 12:13 a.m. UTC
Dynamically allocating the sw_flow_key reduces stack usage of
ovs_vport_receive from 544 bytes to 64 bytes at the cost of
another GFP_ATOMIC allocation in the receive path.

XXX: is this a problem with memory reserves if ovs is in a
memory reclaim path, or since we have a skb allocated, is it
okay to use some GFP_ATOMIC reserves?

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
 net/openvswitch/vport.c | 14 +++++++++++---
 1 file changed, 11 insertions(+), 3 deletions(-)

Comments

Aaron Conole Sept. 28, 2023, 3:26 p.m. UTC | #1
Nicholas Piggin <npiggin@gmail.com> writes:

> Dynamically allocating the sw_flow_key reduces stack usage of
> ovs_vport_receive from 544 bytes to 64 bytes at the cost of
> another GFP_ATOMIC allocation in the receive path.
>
> XXX: is this a problem with memory reserves if ovs is in a
> memory reclaim path, or since we have a skb allocated, is it
> okay to use some GFP_ATOMIC reserves?
>
> Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
> ---

This represents a fairly large performance hit.  Just my own quick
testing on a system using two netns, iperf3, and simple forwarding rules
shows a 2.5% to 4% performance reduction on x86-64.  Note that this is a
simple case, and doesn't involve a more involved scenario like
multiple bridges, tunnels, and internal ports.  I suspect such cases
will see an even bigger hit.

I don't know the impact of the other changes, but just an FYI that the
performance impact of this change is extremely noticeable on the x86
platform.

----
ip netns add left
ip netns add right

ip link add eth0 type veth peer name l0
ip link set eth0 netns left
ip netns exec left ip addr add 172.31.110.1/24 dev eth0
ip netns exec left ip link set eth0 up
ip link set l0 up

ip link add eth0 type veth peer name r0
ip link set eth0 netns right
ip netns exec right ip addr add 172.31.110.2/24 dev eth0
ip netns exec right ip link set eth0 up
ip link set r0 up

python3 ovs-dpctl.py add-dp br0
python3 ovs-dpctl.py add-if br0 l0
python3 ovs-dpctl.py add-if br0 r0

python3 ovs-dpctl.py add-flow \
  br0 'in_port(1),eth(),eth_type(0x806),arp()' 2
  
python3 ovs-dpctl.py add-flow \
  br0 'in_port(2),eth(),eth_type(0x806),arp()' 1

python3 ovs-dpctl.py add-flow \
  br0 'in_port(1),eth(),eth_type(0x800),ipv4()' 2

python3 ovs-dpctl.py add-flow \
  br0 'in_port(2),eth(),eth_type(0x800),ipv4()' 1

----

ex results without this patch:
[root@wsfd-netdev60 ~]# ip netns exec right ./git/iperf/src/iperf3 -c 172.31.110.1
...
[  5]   0.00-10.00  sec  46.7 GBytes  40.2 Gbits/sec    0             sender
[  5]   0.00-10.00  sec  46.7 GBytes  40.2 Gbits/sec                  receiver


ex results with this patch:
[root@wsfd-netdev60 ~]# ip netns exec right ./git/iperf/src/iperf3 -c 172.31.110.1
...
[  5]   0.00-10.00  sec  44.9 GBytes  38.6 Gbits/sec    0             sender
[  5]   0.00-10.00  sec  44.9 GBytes  38.6 Gbits/sec                  receiver

I did testing with UDP at various bandwidths in addition to this TCP test.

>  net/openvswitch/vport.c | 14 +++++++++++---
>  1 file changed, 11 insertions(+), 3 deletions(-)
>
> diff --git a/net/openvswitch/vport.c b/net/openvswitch/vport.c
> index 972ae01a70f7..ddba3e00832b 100644
> --- a/net/openvswitch/vport.c
> +++ b/net/openvswitch/vport.c
> @@ -494,9 +494,13 @@ u32 ovs_vport_find_upcall_portid(const struct vport *vport,
>  int ovs_vport_receive(struct vport *vport, struct sk_buff *skb,
>  		      const struct ip_tunnel_info *tun_info)
>  {
> -	struct sw_flow_key key;
> +	struct sw_flow_key *key;
>  	int error;
>  
> +	key = kmalloc(sizeof(*key), GFP_ATOMIC);
> +	if (!key)
> +		return -ENOMEM;
> +
>  	OVS_CB(skb)->input_vport = vport;
>  	OVS_CB(skb)->mru = 0;
>  	OVS_CB(skb)->cutlen = 0;
> @@ -510,12 +514,16 @@ int ovs_vport_receive(struct vport *vport, struct sk_buff *skb,
>  	}
>  
>  	/* Extract flow from 'skb' into 'key'. */
> -	error = ovs_flow_key_extract(tun_info, skb, &key);
> +	error = ovs_flow_key_extract(tun_info, skb, key);
>  	if (unlikely(error)) {
>  		kfree_skb(skb);
> +		kfree(key);
>  		return error;
>  	}
> -	ovs_dp_process_packet(skb, &key);
> +	ovs_dp_process_packet(skb, key);
> +
> +	kfree(key);
> +
>  	return 0;
>  }
Nicholas Piggin Sept. 29, 2023, 7 a.m. UTC | #2
On Fri Sep 29, 2023 at 1:26 AM AEST, Aaron Conole wrote:
> Nicholas Piggin <npiggin@gmail.com> writes:
>
> > Dynamically allocating the sw_flow_key reduces stack usage of
> > ovs_vport_receive from 544 bytes to 64 bytes at the cost of
> > another GFP_ATOMIC allocation in the receive path.
> >
> > XXX: is this a problem with memory reserves if ovs is in a
> > memory reclaim path, or since we have a skb allocated, is it
> > okay to use some GFP_ATOMIC reserves?
> >
> > Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
> > ---
>
> This represents a fairly large performance hit.  Just my own quick
> testing on a system using two netns, iperf3, and simple forwarding rules
> shows between 2.5% and 4% performance reduction on x86-64.  Note that it
> is a simple case, and doesn't involve a more involved scenario like
> multiple bridges, tunnels, and internal ports.  I suspect such cases
> will see even bigger hit.
>
> I don't know the impact of the other changes, but just an FYI that the
> performance impact of this change is extremely noticeable on x86
> platform.

Thanks for the numbers. This patch is probably the biggest perf cost,
but unfortunately it's also about the biggest saving. I might have an
idea to improve it.

Thanks,
Nick
Eelco Chaudron Sept. 29, 2023, 8:38 a.m. UTC | #3
On 29 Sep 2023, at 9:00, Nicholas Piggin wrote:

> On Fri Sep 29, 2023 at 1:26 AM AEST, Aaron Conole wrote:
>> Nicholas Piggin <npiggin@gmail.com> writes:
>>
>>> Dynamically allocating the sw_flow_key reduces stack usage of
>>> ovs_vport_receive from 544 bytes to 64 bytes at the cost of
>>> another GFP_ATOMIC allocation in the receive path.
>>>
>>> XXX: is this a problem with memory reserves if ovs is in a
>>> memory reclaim path, or since we have a skb allocated, is it
>>> okay to use some GFP_ATOMIC reserves?
>>>
>>> Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
>>> ---
>>
>> This represents a fairly large performance hit.  Just my own quick
>> testing on a system using two netns, iperf3, and simple forwarding rules
>> shows between 2.5% and 4% performance reduction on x86-64.  Note that it
>> is a simple case, and doesn't involve a more involved scenario like
>> multiple bridges, tunnels, and internal ports.  I suspect such cases
>> will see even bigger hit.
>>
>> I don't know the impact of the other changes, but just an FYI that the
>> performance impact of this change is extremely noticeable on x86
>> platform.
>
> Thanks for the numbers. This patch is probably the biggest perf cost,
> but unfortunately it's also about the biggest saving. I might have an
> idea to improve it.

Also, were you able to figure out why we do not see this problem on x86 and arm64? Is the stack usage so much larger, or is there some other root cause? Is there a simple reproducer, as that might help you profile the differences between the architectures?
Nicholas Piggin Oct. 4, 2023, 7:11 a.m. UTC | #4
On Fri Sep 29, 2023 at 6:38 PM AEST, Eelco Chaudron wrote:
>
>
> On 29 Sep 2023, at 9:00, Nicholas Piggin wrote:
>
> > On Fri Sep 29, 2023 at 1:26 AM AEST, Aaron Conole wrote:
> >> Nicholas Piggin <npiggin@gmail.com> writes:
> >>
> >>> Dynamically allocating the sw_flow_key reduces stack usage of
> >>> ovs_vport_receive from 544 bytes to 64 bytes at the cost of
> >>> another GFP_ATOMIC allocation in the receive path.
> >>>
> >>> XXX: is this a problem with memory reserves if ovs is in a
> >>> memory reclaim path, or since we have a skb allocated, is it
> >>> okay to use some GFP_ATOMIC reserves?
> >>>
> >>> Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
> >>> ---
> >>
> >> This represents a fairly large performance hit.  Just my own quick
> >> testing on a system using two netns, iperf3, and simple forwarding rules
> >> shows between 2.5% and 4% performance reduction on x86-64.  Note that it
> >> is a simple case, and doesn't involve a more involved scenario like
> >> multiple bridges, tunnels, and internal ports.  I suspect such cases
> >> will see even bigger hit.
> >>
> >> I don't know the impact of the other changes, but just an FYI that the
> >> performance impact of this change is extremely noticeable on x86
> >> platform.
> >
> > Thanks for the numbers. This patch is probably the biggest perf cost,
> > but unfortunately it's also about the biggest saving. I might have an
> > idea to improve it.
>
> Also, were you able to figure out why we do not see this problem on
> x86 and arm64? Is the stack usage so much larger, or is there some
> other root cause?

Haven't pinpointed it exactly. The ppc64le interrupt entry frame is nearly
3x larger than x86-64's, about 200 bytes. So that's 400 if a hard
interrupt (not seen in the backtrace) is what overflowed it. Stack
alignment I think is 32 bytes vs 16 for x86-64. And different amounts of
spilling, non-volatile register use, and inlining choices by the
compiler could nudge things one way or another. There are little to no
ppc64le-specific data structures on the stack in any of this call chain
that should cause much more bloat though, AFAIKS.

So other archs should not be far from overflowing 16kB, I think.

> Is there a simple replicator, as this might help you
> profile the differences between the architectures?

Unfortunately not, it's some kubernetes contraption I don't know how
to reproduce myself.

Thanks,
Nick
Nicholas Piggin Oct. 4, 2023, 7:29 a.m. UTC | #5
On Fri Sep 29, 2023 at 1:26 AM AEST, Aaron Conole wrote:
> Nicholas Piggin <npiggin@gmail.com> writes:
>
> > Dynamically allocating the sw_flow_key reduces stack usage of
> > ovs_vport_receive from 544 bytes to 64 bytes at the cost of
> > another GFP_ATOMIC allocation in the receive path.
> >
> > XXX: is this a problem with memory reserves if ovs is in a
> > memory reclaim path, or since we have a skb allocated, is it
> > okay to use some GFP_ATOMIC reserves?
> >
> > Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
> > ---
>
> This represents a fairly large performance hit.  Just my own quick
> testing on a system using two netns, iperf3, and simple forwarding rules
> shows between 2.5% and 4% performance reduction on x86-64.  Note that it
> is a simple case, and doesn't involve a more involved scenario like
> multiple bridges, tunnels, and internal ports.  I suspect such cases
> will see even bigger hit.
>
> I don't know the impact of the other changes, but just an FYI that the
> performance impact of this change is extremely noticeable on x86
> platform.
>
> ----
> ip netns add left
> ip netns add right
>
> ip link add eth0 type veth peer name l0
> ip link set eth0 netns left
> ip netns exec left ip addr add 172.31.110.1/24 dev eth0
> ip netns exec left ip link set eth0 up
> ip link set l0 up
>
> ip link add eth0 type veth peer name r0
> ip link set eth0 netns right
> ip netns exec right ip addr add 172.31.110.2/24 dev eth0
> ip netns exec right ip link set eth0 up
> ip link set r0 up
>
> python3 ovs-dpctl.py add-dp br0
> python3 ovs-dpctl.py add-if br0 l0
> python3 ovs-dpctl.py add-if br0 r0
>
> python3 ovs-dpctl.py add-flow \
>   br0 'in_port(1),eth(),eth_type(0x806),arp()' 2
>   
> python3 ovs-dpctl.py add-flow \
>   br0 'in_port(2),eth(),eth_type(0x806),arp()' 1
>
> python3 ovs-dpctl.py add-flow \
>   br0 'in_port(1),eth(),eth_type(0x800),ipv4()' 2
>
> python3 ovs-dpctl.py add-flow \
>   br0 'in_port(2),eth(),eth_type(0x800),ipv4()' 1
>
> ----
>
> ex results without this patch:
> [root@wsfd-netdev60 ~]# ip netns exec right ./git/iperf/src/iperf3 -c 172.31.110.1
> ...
> [  5]   0.00-10.00  sec  46.7 GBytes  40.2 Gbits/sec    0             sender
> [  5]   0.00-10.00  sec  46.7 GBytes  40.2 Gbits/sec                  receiver
>
>
> ex results with this patch:
> [root@wsfd-netdev60 ~]# ip netns exec right ./git/iperf/src/iperf3 -c 172.31.110.1
> ...
> [  5]   0.00-10.00  sec  44.9 GBytes  38.6 Gbits/sec    0             sender
> [  5]   0.00-10.00  sec  44.9 GBytes  38.6 Gbits/sec                  receiver
>
> I did testing with udp at various bandwidths and this tcp testing.

Thanks for the test case. It works perfectly in the end, but it took me
days to get there because of a random conspiracy of problems I hit :(
Sorry for the slow reply, but I have now been able to test another idea
for this. Performance seems to be within the noise with the full series,
but my system is only getting ~half the rate of yours, so you might see
more movement.

Instead of the slab allocator, it reuses the per-cpu actions key
allocator here:

https://github.com/torvalds/linux/commit/878f01f04ca858e445ff4b4c64351a25bb8399e3
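
Roughly, the idea is along these lines (a sketch only, with illustrative
names and an assumed nesting limit; the actual change is in the commit
linked above):

#define OVS_KEY_POOL_DEPTH	4	/* assumed nesting limit, sketch only */

struct ovs_pcpu_keys {
	struct sw_flow_key key[OVS_KEY_POOL_DEPTH];
	int level;			/* current nesting depth on this CPU */
};
static DEFINE_PER_CPU(struct ovs_pcpu_keys, ovs_key_pool);

/* Assumes the receive path cannot be preempted while a key is in use,
 * the same assumption the existing per-cpu action keys rely on. */
static struct sw_flow_key *ovs_key_get(void)
{
	struct ovs_pcpu_keys *pool = this_cpu_ptr(&ovs_key_pool);

	if (pool->level >= OVS_KEY_POOL_DEPTH)
		return NULL;	/* nested too deep; caller drops the packet */
	return &pool->key[pool->level++];
}

static void ovs_key_put(void)
{
	this_cpu_ptr(&ovs_key_pool)->level--;
}

ovs_vport_receive() would then bracket flow extraction and
ovs_dp_process_packet() with ovs_key_get()/ovs_key_put(), avoiding both
the ~500 byte stack footprint and the per-packet GFP_ATOMIC allocation.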

Pushed the series to kvm branch of https://github.com/npiggin/linux

I can repost the series as a second RFC but will wait for thoughts on
this approach.

Thanks,
Nick
Aaron Conole Oct. 4, 2023, 3:15 p.m. UTC | #6
"Nicholas Piggin" <npiggin@gmail.com> writes:

> On Fri Sep 29, 2023 at 6:38 PM AEST, Eelco Chaudron wrote:
>>
>>
>> On 29 Sep 2023, at 9:00, Nicholas Piggin wrote:
>>
>> > On Fri Sep 29, 2023 at 1:26 AM AEST, Aaron Conole wrote:
>> >> Nicholas Piggin <npiggin@gmail.com> writes:
>> >>
>> >>> Dynamically allocating the sw_flow_key reduces stack usage of
>> >>> ovs_vport_receive from 544 bytes to 64 bytes at the cost of
>> >>> another GFP_ATOMIC allocation in the receive path.
>> >>>
>> >>> XXX: is this a problem with memory reserves if ovs is in a
>> >>> memory reclaim path, or since we have a skb allocated, is it
>> >>> okay to use some GFP_ATOMIC reserves?
>> >>>
>> >>> Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
>> >>> ---
>> >>
>> >> This represents a fairly large performance hit.  Just my own quick
>> >> testing on a system using two netns, iperf3, and simple forwarding rules
>> >> shows between 2.5% and 4% performance reduction on x86-64.  Note that it
>> >> is a simple case, and doesn't involve a more involved scenario like
>> >> multiple bridges, tunnels, and internal ports.  I suspect such cases
>> >> will see even bigger hit.
>> >>
>> >> I don't know the impact of the other changes, but just an FYI that the
>> >> performance impact of this change is extremely noticeable on x86
>> >> platform.
>> >
>> > Thanks for the numbers. This patch is probably the biggest perf cost,
>> > but unfortunately it's also about the biggest saving. I might have an
>> > idea to improve it.
>>
>> Also, were you able to figure out why we do not see this problem on
>> x86 and arm64? Is the stack usage so much larger, or is there some
>> other root cause?
>
> Haven't pinpointed it exactly. ppc64le interrupt entry frame is nearly
> 3x larger than x86-64, about 200 bytes. So there's 400 if a hard
> interrupt (not seen in the backtrace) is what overflowed it. Stack
> alignment I think is 32 bytes vs 16 for x86-64. And different amount of
> spilling and non-volatile register use and inlining choices by the
> compiler could nudge things one way or another. There is little to no
> ppc64le specific data structures on the stack in any of this call chain
> which should cause much more bloat though, AFAIKS.
>
> So other archs should not be far away from overflowing 16kB I think.
>
>> Is there a simple replicator, as this might help you
>> profile the differences between the architectures?
>
> Unfortunately not, it's some kubernetes contraption I don't know how
> to reproduce myself.

If we can get the flow dump and configuration, we can probably make sure
to reproduce it with ovs-dpctl.py (adding any missing features, etc.).  I
guess it should be simple to get (ovs-vsctl show, ovs-appctl
dpctl/dump-flows) and we can try to replicate it.

> Thanks,
> Nick
Aaron Conole Oct. 4, 2023, 3:16 p.m. UTC | #7
"Nicholas Piggin" <npiggin@gmail.com> writes:

> On Fri Sep 29, 2023 at 1:26 AM AEST, Aaron Conole wrote:
>> Nicholas Piggin <npiggin@gmail.com> writes:
>>
>> > Dynamically allocating the sw_flow_key reduces stack usage of
>> > ovs_vport_receive from 544 bytes to 64 bytes at the cost of
>> > another GFP_ATOMIC allocation in the receive path.
>> >
>> > XXX: is this a problem with memory reserves if ovs is in a
>> > memory reclaim path, or since we have a skb allocated, is it
>> > okay to use some GFP_ATOMIC reserves?
>> >
>> > Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
>> > ---
>>
>> This represents a fairly large performance hit.  Just my own quick
>> testing on a system using two netns, iperf3, and simple forwarding rules
>> shows between 2.5% and 4% performance reduction on x86-64.  Note that it
>> is a simple case, and doesn't involve a more involved scenario like
>> multiple bridges, tunnels, and internal ports.  I suspect such cases
>> will see even bigger hit.
>>
>> I don't know the impact of the other changes, but just an FYI that the
>> performance impact of this change is extremely noticeable on x86
>> platform.
>>
>> ----
>> ip netns add left
>> ip netns add right
>>
>> ip link add eth0 type veth peer name l0
>> ip link set eth0 netns left
>> ip netns exec left ip addr add 172.31.110.1/24 dev eth0
>> ip netns exec left ip link set eth0 up
>> ip link set l0 up
>>
>> ip link add eth0 type veth peer name r0
>> ip link set eth0 netns right
>> ip netns exec right ip addr add 172.31.110.2/24 dev eth0
>> ip netns exec right ip link set eth0 up
>> ip link set r0 up
>>
>> python3 ovs-dpctl.py add-dp br0
>> python3 ovs-dpctl.py add-if br0 l0
>> python3 ovs-dpctl.py add-if br0 r0
>>
>> python3 ovs-dpctl.py add-flow \
>>   br0 'in_port(1),eth(),eth_type(0x806),arp()' 2
>>   
>> python3 ovs-dpctl.py add-flow \
>>   br0 'in_port(2),eth(),eth_type(0x806),arp()' 1
>>
>> python3 ovs-dpctl.py add-flow \
>>   br0 'in_port(1),eth(),eth_type(0x800),ipv4()' 2
>>
>> python3 ovs-dpctl.py add-flow \
>>   br0 'in_port(2),eth(),eth_type(0x800),ipv4()' 1
>>
>> ----
>>
>> ex results without this patch:
>> [root@wsfd-netdev60 ~]# ip netns exec right ./git/iperf/src/iperf3 -c 172.31.110.1
>> ...
>> [  5]   0.00-10.00  sec  46.7 GBytes  40.2 Gbits/sec    0             sender
>> [  5]   0.00-10.00  sec  46.7 GBytes  40.2 Gbits/sec                  receiver
>>
>>
>> ex results with this patch:
>> [root@wsfd-netdev60 ~]# ip netns exec right ./git/iperf/src/iperf3 -c 172.31.110.1
>> ...
>> [  5]   0.00-10.00  sec  44.9 GBytes  38.6 Gbits/sec    0             sender
>> [  5]   0.00-10.00  sec  44.9 GBytes  38.6 Gbits/sec                  receiver
>>
>> I did testing with udp at various bandwidths and this tcp testing.
>
> Thanks for the test case. It works perfectly in the end, but it took me
> days to get there because of a random conspiracy of problems I hit :(
> Sorry for the slow reply, but I was now able to test another idea for
> this. Performance seems to be within the noise with the full series, but
> my system is only getting ~half the rate of yours so you might see more
> movement.
>
> Instead of slab it reuses the per-cpu actions key allocator here.
>
> https://github.com/torvalds/linux/commit/878f01f04ca858e445ff4b4c64351a25bb8399e3
>
> Pushed the series to kvm branch of https://github.com/npiggin/linux
>
> I can repost the series as a second RFC but will wait for thoughts on
> this approach.

Thanks - I'll take a look at it.

> Thanks,
> Nick
Nicholas Piggin Oct. 5, 2023, 2:01 a.m. UTC | #8
On Fri Sep 29, 2023 at 6:38 PM AEST, Eelco Chaudron wrote:
>
>
> On 29 Sep 2023, at 9:00, Nicholas Piggin wrote:
>
> > On Fri Sep 29, 2023 at 1:26 AM AEST, Aaron Conole wrote:
> >> Nicholas Piggin <npiggin@gmail.com> writes:
> >>
> >>> Dynamically allocating the sw_flow_key reduces stack usage of
> >>> ovs_vport_receive from 544 bytes to 64 bytes at the cost of
> >>> another GFP_ATOMIC allocation in the receive path.
> >>>
> >>> XXX: is this a problem with memory reserves if ovs is in a
> >>> memory reclaim path, or since we have a skb allocated, is it
> >>> okay to use some GFP_ATOMIC reserves?
> >>>
> >>> Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
> >>> ---
> >>
> >> This represents a fairly large performance hit.  Just my own quick
> >> testing on a system using two netns, iperf3, and simple forwarding rules
> >> shows between 2.5% and 4% performance reduction on x86-64.  Note that it
> >> is a simple case, and doesn't involve a more involved scenario like
> >> multiple bridges, tunnels, and internal ports.  I suspect such cases
> >> will see even bigger hit.
> >>
> >> I don't know the impact of the other changes, but just an FYI that the
> >> performance impact of this change is extremely noticeable on x86
> >> platform.
> >
> > Thanks for the numbers. This patch is probably the biggest perf cost,
> > but unfortunately it's also about the biggest saving. I might have an
> > idea to improve it.
>
> Also, were you able to figure out why we do not see this problem on x86 and arm64? Is the stack usage so much larger, or is there some other root cause? Is there a simple replicator, as this might help you profile the differences between the architectures?

I found some snippets of equivalent call chains (this is for 4.18 RHEL8
kernels, but it's just to give a general idea of the stack overhead
differences in C code). Frame sizes are annotated on the right-hand side:

[c0000007ffdba980] do_execute_actions     496
[c0000007ffdbab70] ovs_execute_actions    128
[c0000007ffdbabf0] ovs_dp_process_packet  208
[c0000007ffdbacc0] clone_execute          176
[c0000007ffdbad70] do_execute_actions     496
[c0000007ffdbaf60] ovs_execute_actions    128
[c0000007ffdbafe0] ovs_dp_process_packet  208
[c0000007ffdbb0b0] ovs_vport_receive      528
[c0000007ffdbb2c0] internal_dev_xmit
                                 total = 2368
[ff49b6d4065a3628] do_execute_actions     416
[ff49b6d4065a37c8] ovs_execute_actions     48
[ff49b6d4065a37f8] ovs_dp_process_packet  112
[ff49b6d4065a3868] clone_execute           64
[ff49b6d4065a38a8] do_execute_actions     416
[ff49b6d4065a3a48] ovs_execute_actions     48
[ff49b6d4065a3a78] ovs_dp_process_packet  112
[ff49b6d4065a3ae8] ovs_vport_receive      496
[ff49b6d4065a3cd8] netdev_frame_hook
                                 total = 1712

That's more significant than I thought: nearly 40% more stack usage for
ppc, even with 3 frames having large local variables that can't be
avoided on either arch.

So x86_64 could be quite safe with its 16kB stack for the same
workload, explaining why the same overflow has not been seen there.

Thanks,
Nick
Aaron Conole Oct. 11, 2023, 1:34 p.m. UTC | #9
"Nicholas Piggin" <npiggin@gmail.com> writes:

> On Fri Sep 29, 2023 at 6:38 PM AEST, Eelco Chaudron wrote:
>>
>>
>> On 29 Sep 2023, at 9:00, Nicholas Piggin wrote:
>>
>> > On Fri Sep 29, 2023 at 1:26 AM AEST, Aaron Conole wrote:
>> >> Nicholas Piggin <npiggin@gmail.com> writes:
>> >>
>> >>> Dynamically allocating the sw_flow_key reduces stack usage of
>> >>> ovs_vport_receive from 544 bytes to 64 bytes at the cost of
>> >>> another GFP_ATOMIC allocation in the receive path.
>> >>>
>> >>> XXX: is this a problem with memory reserves if ovs is in a
>> >>> memory reclaim path, or since we have a skb allocated, is it
>> >>> okay to use some GFP_ATOMIC reserves?
>> >>>
>> >>> Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
>> >>> ---
>> >>
>> >> This represents a fairly large performance hit.  Just my own quick
>> >> testing on a system using two netns, iperf3, and simple forwarding rules
>> >> shows between 2.5% and 4% performance reduction on x86-64.  Note that it
>> >> is a simple case, and doesn't involve a more involved scenario like
>> >> multiple bridges, tunnels, and internal ports.  I suspect such cases
>> >> will see even bigger hit.
>> >>
>> >> I don't know the impact of the other changes, but just an FYI that the
>> >> performance impact of this change is extremely noticeable on x86
>> >> platform.
>> >
>> > Thanks for the numbers. This patch is probably the biggest perf cost,
>> > but unfortunately it's also about the biggest saving. I might have an
>> > idea to improve it.
>>
>> Also, were you able to figure out why we do not see this problem on
>> x86 and arm64? Is the stack usage so much larger, or is there some
>> other root cause? Is there a simple replicator, as this might help
>> you profile the differences between the architectures?
>
> I found some snippets of equivalent call chain (this is for 4.18 RHEL8
> kernels, but it's just to give a general idea of stack overhead
> differences in C code). Frame size annotated on the right hand side:
>
> [c0000007ffdba980] do_execute_actions     496
> [c0000007ffdbab70] ovs_execute_actions    128
> [c0000007ffdbabf0] ovs_dp_process_packet  208
> [c0000007ffdbacc0] clone_execute          176
> [c0000007ffdbad70] do_execute_actions     496
> [c0000007ffdbaf60] ovs_execute_actions    128
> [c0000007ffdbafe0] ovs_dp_process_packet  208
> [c0000007ffdbb0b0] ovs_vport_receive      528
> [c0000007ffdbb2c0] internal_dev_xmit
>                                  total = 2368
> [ff49b6d4065a3628] do_execute_actions     416
> [ff49b6d4065a37c8] ovs_execute_actions     48
> [ff49b6d4065a37f8] ovs_dp_process_packet  112
> [ff49b6d4065a3868] clone_execute           64
> [ff49b6d4065a38a8] do_execute_actions     416
> [ff49b6d4065a3a48] ovs_execute_actions     48
> [ff49b6d4065a3a78] ovs_dp_process_packet  112
> [ff49b6d4065a3ae8] ovs_vport_receive      496
> [ff49b6d4065a3cd8] netdev_frame_hook
>                                  total = 1712
>
> That's more significant than I thought, nearly 40% more stack usage for
> ppc even with 3 frames having large local variables that can't be
> avoided for either arch.
>
> So, x86_64 could be quite safe with its 16kB stack for the same
> workload, explaining why same overflow has not been seen there.

This is interesting - is it possible that we could resolve this without
needing to change the kernel - or at least without changing how OVS
works?  Why are these so different?  Maybe there's some bloat in some of
the ppc data structures that can be addressed?  For example,
ovs_execute_actions shouldn't really be that different, but I wonder if
the way the per-cpu infra works, or the way the deferred action
processing gets inlined, could be causing stack bloat?

> Thanks,
> Nick
Nicholas Piggin Oct. 11, 2023, 11:58 p.m. UTC | #10
On Wed Oct 11, 2023 at 11:34 PM AEST, Aaron Conole wrote:
> "Nicholas Piggin" <npiggin@gmail.com> writes:
>
> > On Fri Sep 29, 2023 at 6:38 PM AEST, Eelco Chaudron wrote:
> >>
> >>
> >> On 29 Sep 2023, at 9:00, Nicholas Piggin wrote:
> >>
> >> > On Fri Sep 29, 2023 at 1:26 AM AEST, Aaron Conole wrote:
> >> >> Nicholas Piggin <npiggin@gmail.com> writes:
> >> >>
> >> >>> Dynamically allocating the sw_flow_key reduces stack usage of
> >> >>> ovs_vport_receive from 544 bytes to 64 bytes at the cost of
> >> >>> another GFP_ATOMIC allocation in the receive path.
> >> >>>
> >> >>> XXX: is this a problem with memory reserves if ovs is in a
> >> >>> memory reclaim path, or since we have a skb allocated, is it
> >> >>> okay to use some GFP_ATOMIC reserves?
> >> >>>
> >> >>> Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
> >> >>> ---
> >> >>
> >> >> This represents a fairly large performance hit.  Just my own quick
> >> >> testing on a system using two netns, iperf3, and simple forwarding rules
> >> >> shows between 2.5% and 4% performance reduction on x86-64.  Note that it
> >> >> is a simple case, and doesn't involve a more involved scenario like
> >> >> multiple bridges, tunnels, and internal ports.  I suspect such cases
> >> >> will see even bigger hit.
> >> >>
> >> >> I don't know the impact of the other changes, but just an FYI that the
> >> >> performance impact of this change is extremely noticeable on x86
> >> >> platform.
> >> >
> >> > Thanks for the numbers. This patch is probably the biggest perf cost,
> >> > but unfortunately it's also about the biggest saving. I might have an
> >> > idea to improve it.
> >>
> >> Also, were you able to figure out why we do not see this problem on
> >> x86 and arm64? Is the stack usage so much larger, or is there some
> >> other root cause? Is there a simple replicator, as this might help
> >> you profile the differences between the architectures?
> >
> > I found some snippets of equivalent call chain (this is for 4.18 RHEL8
> > kernels, but it's just to give a general idea of stack overhead
> > differences in C code). Frame size annotated on the right hand side:
> >
> > [c0000007ffdba980] do_execute_actions     496
> > [c0000007ffdbab70] ovs_execute_actions    128
> > [c0000007ffdbabf0] ovs_dp_process_packet  208
> > [c0000007ffdbacc0] clone_execute          176
> > [c0000007ffdbad70] do_execute_actions     496
> > [c0000007ffdbaf60] ovs_execute_actions    128
> > [c0000007ffdbafe0] ovs_dp_process_packet  208
> > [c0000007ffdbb0b0] ovs_vport_receive      528
> > [c0000007ffdbb2c0] internal_dev_xmit
> >                                  total = 2368
> > [ff49b6d4065a3628] do_execute_actions     416
> > [ff49b6d4065a37c8] ovs_execute_actions     48
> > [ff49b6d4065a37f8] ovs_dp_process_packet  112
> > [ff49b6d4065a3868] clone_execute           64
> > [ff49b6d4065a38a8] do_execute_actions     416
> > [ff49b6d4065a3a48] ovs_execute_actions     48
> > [ff49b6d4065a3a78] ovs_dp_process_packet  112
> > [ff49b6d4065a3ae8] ovs_vport_receive      496
> > [ff49b6d4065a3cd8] netdev_frame_hook
> >                                  total = 1712
> >
> > That's more significant than I thought, nearly 40% more stack usage for
> > ppc even with 3 frames having large local variables that can't be
> > avoided for either arch.
> >
> > So, x86_64 could be quite safe with its 16kB stack for the same
> > workload, explaining why same overflow has not been seen there.
>
> This is interesting - is it possible that we could resolve this without
> needing to change the kernel - or at least without changing how OVS
> works?

Not really.

To be clear, I'm not saying ovs is the one and only problem; it could be
resolved if the stack were larger or if other things did not use so
much, etc.

Maybe other things could be changed too, but ovs uses several KB of
stack that it doesn't need to, and since it is also causing recursion it
needs to be as tight as possible with its stack use.

> Why are these so different?  Maybe there's some bloat in some of
> the ppc data structures that can be addressed?  For example,
> ovs_execute_actions shouldn't really be that different, but I wonder if
> the way the per-cpu infra works, or the deferred action processing gets
> inlined would be causing stack bloat?

Most of the extra stack usage is not due to the Linux powerpc arch
defining certain types and structures to be larger (most are the same
size as on other 64-bit archs), but rather due to C and GCC. I have
asked the powerpc GCC people about stack size and there is no easy
option to reduce it; even if a newer version of GCC could improve it, we
would still need to deal with older ones.

Powerpc has a larger minimum stack frame size (32 bytes) and larger
stack alignment (32 bytes vs 16, IIRC). It also has more non-volatile
registers and probably uses them more, which requires saving them to the
stack. So some of it is fundamental.

In some cases I can't really see why GCC on ppc uses so much. AFAIKS
ovs_execute_actions could be using 96 bytes, but it's possible I'm
missing an alignment requirement.
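
For reference, per-function numbers like those above can be
cross-checked with GCC's -fstack-usage or the kernel's checkstack
target, along these lines (assuming a normal kernel build setup):

  # per-object .su stack-usage reports, limited to the OVS objects
  make KCFLAGS=-fstack-usage net/openvswitch/

  # or a tree-wide report of the largest stack frames
  make checkstack

That makes it easier to compare the same functions between ppc64le and
x86-64 builds.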

Thanks,
Nick

Patch

diff --git a/net/openvswitch/vport.c b/net/openvswitch/vport.c
index 972ae01a70f7..ddba3e00832b 100644
--- a/net/openvswitch/vport.c
+++ b/net/openvswitch/vport.c
@@ -494,9 +494,13 @@  u32 ovs_vport_find_upcall_portid(const struct vport *vport,
 int ovs_vport_receive(struct vport *vport, struct sk_buff *skb,
 		      const struct ip_tunnel_info *tun_info)
 {
-	struct sw_flow_key key;
+	struct sw_flow_key *key;
 	int error;
 
+	key = kmalloc(sizeof(*key), GFP_ATOMIC);
+	if (!key)
+		return -ENOMEM;
+
 	OVS_CB(skb)->input_vport = vport;
 	OVS_CB(skb)->mru = 0;
 	OVS_CB(skb)->cutlen = 0;
@@ -510,12 +514,16 @@  int ovs_vport_receive(struct vport *vport, struct sk_buff *skb,
 	}
 
 	/* Extract flow from 'skb' into 'key'. */
-	error = ovs_flow_key_extract(tun_info, skb, &key);
+	error = ovs_flow_key_extract(tun_info, skb, key);
 	if (unlikely(error)) {
 		kfree_skb(skb);
+		kfree(key);
 		return error;
 	}
-	ovs_dp_process_packet(skb, &key);
+	ovs_dp_process_packet(skb, key);
+
+	kfree(key);
+
 	return 0;
 }