
net: ipv4: Cache pmtu for all packet paths if multipath enabled

Message ID 20241029152206.303004-1-deliran@verdict.gg (mailing list archive)
State Superseded
Delegated to: Netdev Maintainers
Series net: ipv4: Cache pmtu for all packet paths if multipath enabled

Checks

Context Check Description
netdev/series_format warning Single patches do not need cover letters; Target tree name not specified in the subject
netdev/tree_selection success Guessed tree name to be net-next
netdev/ynl success Generated files up to date; no warnings/errors; no diff in generated;
netdev/fixes_present success Fixes tag not required for -next series
netdev/header_inline success No static functions without inline keyword in header files
netdev/build_32bit fail Errors and warnings before: 5 this patch: 12
netdev/build_tools success No tools touched, skip
netdev/cc_maintainers warning 4 maintainers not CCed: horms@kernel.org kuba@kernel.org pabeni@redhat.com edumazet@google.com
netdev/build_clang fail Errors and warnings before: 3 this patch: 11
netdev/verify_signedoff success Signed-off-by tag matches author and committer
netdev/deprecated_api success None detected
netdev/check_selftest success No net selftest shell script
netdev/verify_fixes success No Fixes tag
netdev/build_allmodconfig_warn fail Errors and warnings before: 4 this patch: 10
netdev/checkpatch warning CHECK: Alignment should match open parenthesis WARNING: line length of 81 exceeds 80 columns WARNING: line length of 84 exceeds 80 columns
netdev/build_clang_rust success No Rust files in patch. Skipping build
netdev/kdoc success Errors and warnings before: 0 this patch: 0
netdev/source_inline success Was 0 now: 0

Commit Message

Vladimir Vdovin Oct. 29, 2024, 3:21 p.m. UTC
Check the number of paths with fib_info_num_path() and call
update_or_create_fnhe() for every path. The problem is that the pmtu
is cached only for the oif that received the ICMP "fragmentation
needed" message; other oifs will still try to use the "default"
interface mtu.

An example topology showing the problem:

                    |  host1
                +---------+
                |  dummy0 | 10.179.20.18/32  mtu9000
                +---------+
        +-----------+----------------+
    +---------+                     +---------+
    | ens17f0 |  10.179.2.141/31    | ens17f1 |  10.179.2.13/31
    +---------+                     +---------+
        |    (all here have mtu 9000)    |
    +------+                         +------+
    | ro1  |  10.179.2.140/31        | ro2  |  10.179.2.12/31
    +------+                         +------+
        |                                |
---------+------------+-------------------+------
                        |
                    +-----+
                    | ro3 | 10.10.10.10  mtu1500
                    +-----+
                        |
    ========================================
                some networks
    ========================================
                        |
                    +-----+
                    | eth0| 10.10.30.30  mtu9000
                    +-----+
                        |  host2

host1 has multipath enabled and
sysctl net.ipv4.fib_multipath_hash_policy = 1:

default proto static src 10.179.20.18
        nexthop via 10.179.2.12 dev ens17f1 weight 1
        nexthop via 10.179.2.140 dev ens17f0 weight 1

When host1 tries to do PMTU discovery from 10.179.20.18/32 to host2,
host1 receives on the ens17f1 iface an ICMP packet from ro3 saying
that ro3's mtu is 1500, and host1 caches it in the nexthop exception
cache.

The problem is that it is cached only for the iface that received the
ICMP, and there is no guarantee that ro3 will ever send an ICMP
message to host1 via the other path.

host1 now has these routes to host2:

ip r g 10.10.30.30 sport 30000 dport 443
10.10.30.30 via 10.179.2.12 dev ens17f1 src 10.179.20.18 uid 0
    cache expires 521sec mtu 1500

ip r g 10.10.30.30 sport 30033 dport 443
10.10.30.30 via 10.179.2.140 dev ens17f0 src 10.179.20.18 uid 0
    cache

So when host1 tries again to reach host2 with mtu > 1500, a flow that
happens to hash to oif=ens17f1 is fine, but a flow hashed to
oif=ens17f0 is blackholed; host1 keeps getting ICMP messages from ro3
on ens17f1, until the lucky day when ro3 sends one over a flow that
reaches ens17f0.

Signed-off-by: Vladimir Vdovin <deliran@verdict.gg>
---
 net/ipv4/route.c | 13 +++++++++++++
 1 file changed, 13 insertions(+)


base-commit: 66600fac7a984dea4ae095411f644770b2561ede

Comments

David Ahern Oct. 29, 2024, 11:22 p.m. UTC | #1
On 10/29/24 9:21 AM, Vladimir Vdovin wrote:
> Check number of paths by fib_info_num_path(),
> and update_or_create_fnhe() for every path.
> Problem is that pmtu is cached only for the oif
> that has received icmp message "need to frag",
> other oifs will still try to use "default" iface mtu.
> 
> An example topology showing the problem:
> 
>                     |  host1
>                 +---------+
>                 |  dummy0 | 10.179.20.18/32  mtu9000
>                 +---------+
>         +-----------+----------------+
>     +---------+                     +---------+
>     | ens17f0 |  10.179.2.141/31    | ens17f1 |  10.179.2.13/31
>     +---------+                     +---------+
>         |    (all here have mtu 9000)    |
>     +------+                         +------+
>     | ro1  |  10.179.2.140/31        | ro2  |  10.179.2.12/31
>     +------+                         +------+
>         |                                |
> ---------+------------+-------------------+------
>                         |
>                     +-----+
>                     | ro3 | 10.10.10.10  mtu1500
>                     +-----+
>                         |
>     ========================================
>                 some networks
>     ========================================
>                         |
>                     +-----+
>                     | eth0| 10.10.30.30  mtu9000
>                     +-----+
>                         |  host2
> 
> host1 have enabled multipath and
> sysctl net.ipv4.fib_multipath_hash_policy = 1:
> 
> default proto static src 10.179.20.18
>         nexthop via 10.179.2.12 dev ens17f1 weight 1
>         nexthop via 10.179.2.140 dev ens17f0 weight 1
> 
> When host1 tries to do pmtud from 10.179.20.18/32 to host2,
> host1 receives at ens17f1 iface an icmp packet from ro3 that ro3 mtu=1500.
> And host1 caches it in nexthop exceptions cache.
> 
> Problem is that it is cached only for the iface that has received icmp,
> and there is no way that ro3 will send icmp msg to host1 via another path.
> 
> Host1 now have this routes to host2:
> 
> ip r g 10.10.30.30 sport 30000 dport 443
> 10.10.30.30 via 10.179.2.12 dev ens17f1 src 10.179.20.18 uid 0
>     cache expires 521sec mtu 1500
> 
> ip r g 10.10.30.30 sport 30033 dport 443
> 10.10.30.30 via 10.179.2.140 dev ens17f0 src 10.179.20.18 uid 0
>     cache
> 

Well-known problem; years ago I meant to send a similar patch.

Can you add a test case under selftests? You will see many pmtu,
redirect and multipath tests there.

> So when host1 tries again to reach host2 with mtu>1500,
> if packet flow is lucky enough to be hashed with oif=ens17f1 its ok,
> if oif=ens17f0 it blackholes and still gets icmp msgs from ro3 to ens17f1,
> until lucky day when ro3 will send it through another flow to ens17f0.
> 
> Signed-off-by: Vladimir Vdovin <deliran@verdict.gg>
> ---
>  net/ipv4/route.c | 13 +++++++++++++
>  1 file changed, 13 insertions(+)
> 
> diff --git a/net/ipv4/route.c b/net/ipv4/route.c
> index 723ac9181558..8eac6e361388 100644
> --- a/net/ipv4/route.c
> +++ b/net/ipv4/route.c
> @@ -1027,10 +1027,23 @@ static void __ip_rt_update_pmtu(struct rtable *rt, struct flowi4 *fl4, u32 mtu)
>  		struct fib_nh_common *nhc;
>  
>  		fib_select_path(net, &res, fl4, NULL);
> +#ifdef CONFIG_IP_ROUTE_MULTIPATH
> +		if (fib_info_num_path(res.fi) > 1) {
> +			int nhsel;
> +
> +			for (nhsel = 0; nhsel < fib_info_num_path(fi); nhsel++) {
> +				nhc = fib_info_nhc(res.fi, nhsel);
> +				update_or_create_fnhe(nhc, fl4->daddr, 0, mtu, lock,
> +					jiffies + net->ipv4.ip_rt_mtu_expires);
> +			}
> +			goto rcu_unlock;
> +		}
> +#endif /* CONFIG_IP_ROUTE_MULTIPATH */
>  		nhc = FIB_RES_NHC(res);
>  		update_or_create_fnhe(nhc, fl4->daddr, 0, mtu, lock,
>  				      jiffies + net->ipv4.ip_rt_mtu_expires);
>  	}
> +rcu_unlock:

compiler error when CONFIG_IP_ROUTE_MULTIPATH is not set.

>  	rcu_read_unlock();
>  }
>  
> 
> base-commit: 66600fac7a984dea4ae095411f644770b2561ede
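
[Editor's note: one way to address the unused-label error David points out
is to drop the goto/label and gate the multipath branch with IS_ENABLED(),
which lets the compiler discard it when CONFIG_IP_ROUTE_MULTIPATH is off.
The sketch below is untested and is not the submitted patch; it also assumes
the loop bound should be res.fi, since __ip_rt_update_pmtu has no local fi:

	rcu_read_lock();
	if (fib_lookup(net, fl4, &res, 0) == 0) {
		struct fib_nh_common *nhc;

		fib_select_path(net, &res, fl4, NULL);
		if (IS_ENABLED(CONFIG_IP_ROUTE_MULTIPATH) &&
		    fib_info_num_path(res.fi) > 1) {
			int nhsel;

			/* Cache the learned PMTU against every nexthop, not
			 * only the one the ICMP reply happened to hash to.
			 */
			for (nhsel = 0; nhsel < fib_info_num_path(res.fi); nhsel++) {
				nhc = fib_info_nhc(res.fi, nhsel);
				update_or_create_fnhe(nhc, fl4->daddr, 0, mtu, lock,
						      jiffies + net->ipv4.ip_rt_mtu_expires);
			}
		} else {
			nhc = FIB_RES_NHC(res);
			update_or_create_fnhe(nhc, fl4->daddr, 0, mtu, lock,
					      jiffies + net->ipv4.ip_rt_mtu_expires);
		}
	}
	rcu_read_unlock();

With IS_ENABLED() both arms are type-checked regardless of the config, which
is generally preferred over #ifdef blocks in .c files.]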
Ido Schimmel Oct. 30, 2024, 5:11 p.m. UTC | #2
On Tue, Oct 29, 2024 at 05:22:23PM -0600, David Ahern wrote:
> On 10/29/24 9:21 AM, Vladimir Vdovin wrote:
> > Check number of paths by fib_info_num_path(),
> > and update_or_create_fnhe() for every path.
> > Problem is that pmtu is cached only for the oif
> > that has received icmp message "need to frag",
> > other oifs will still try to use "default" iface mtu.
> > 
> > An example topology showing the problem:
> > 
> >                     |  host1
> >                 +---------+
> >                 |  dummy0 | 10.179.20.18/32  mtu9000
> >                 +---------+
> >         +-----------+----------------+
> >     +---------+                     +---------+
> >     | ens17f0 |  10.179.2.141/31    | ens17f1 |  10.179.2.13/31
> >     +---------+                     +---------+
> >         |    (all here have mtu 9000)    |
> >     +------+                         +------+
> >     | ro1  |  10.179.2.140/31        | ro2  |  10.179.2.12/31
> >     +------+                         +------+
> >         |                                |
> > ---------+------------+-------------------+------
> >                         |
> >                     +-----+
> >                     | ro3 | 10.10.10.10  mtu1500
> >                     +-----+
> >                         |
> >     ========================================
> >                 some networks
> >     ========================================
> >                         |
> >                     +-----+
> >                     | eth0| 10.10.30.30  mtu9000
> >                     +-----+
> >                         |  host2
> > 
> > host1 have enabled multipath and
> > sysctl net.ipv4.fib_multipath_hash_policy = 1:
> > 
> > default proto static src 10.179.20.18
> >         nexthop via 10.179.2.12 dev ens17f1 weight 1
> >         nexthop via 10.179.2.140 dev ens17f0 weight 1
> > 
> > When host1 tries to do pmtud from 10.179.20.18/32 to host2,
> > host1 receives at ens17f1 iface an icmp packet from ro3 that ro3 mtu=1500.
> > And host1 caches it in nexthop exceptions cache.
> > 
> > Problem is that it is cached only for the iface that has received icmp,
> > and there is no way that ro3 will send icmp msg to host1 via another path.
> > 
> > Host1 now have this routes to host2:
> > 
> > ip r g 10.10.30.30 sport 30000 dport 443
> > 10.10.30.30 via 10.179.2.12 dev ens17f1 src 10.179.20.18 uid 0
> >     cache expires 521sec mtu 1500
> > 
> > ip r g 10.10.30.30 sport 30033 dport 443
> > 10.10.30.30 via 10.179.2.140 dev ens17f0 src 10.179.20.18 uid 0
> >     cache
> > 
> 
> well known problem, and years ago I meant to send a similar patch.

Doesn't IPv6 suffer from a similar problem?

> 
> Can you add a test case under selftests; you will see many pmtu,
> redirect and multipath tests.
> 
> > So when host1 tries again to reach host2 with mtu>1500,
> > if packet flow is lucky enough to be hashed with oif=ens17f1 its ok,
> > if oif=ens17f0 it blackholes and still gets icmp msgs from ro3 to ens17f1,
> > until lucky day when ro3 will send it through another flow to ens17f0.
> > 
> > Signed-off-by: Vladimir Vdovin <deliran@verdict.gg>

Thanks for the detailed commit message
Vladimir Vdovin Nov. 2, 2024, 4:20 p.m. UTC | #3
On Wed Oct 30, 2024 at 8:11 PM MSK, Ido Schimmel wrote:
> On Tue, Oct 29, 2024 at 05:22:23PM -0600, David Ahern wrote:
> > On 10/29/24 9:21 AM, Vladimir Vdovin wrote:
> > > Check number of paths by fib_info_num_path(),
> > > and update_or_create_fnhe() for every path.
> > > Problem is that pmtu is cached only for the oif
> > > that has received icmp message "need to frag",
> > > other oifs will still try to use "default" iface mtu.
> > > 
> > > An example topology showing the problem:
> > > 
> > >                     |  host1
> > >                 +---------+
> > >                 |  dummy0 | 10.179.20.18/32  mtu9000
> > >                 +---------+
> > >         +-----------+----------------+
> > >     +---------+                     +---------+
> > >     | ens17f0 |  10.179.2.141/31    | ens17f1 |  10.179.2.13/31
> > >     +---------+                     +---------+
> > >         |    (all here have mtu 9000)    |
> > >     +------+                         +------+
> > >     | ro1  |  10.179.2.140/31        | ro2  |  10.179.2.12/31
> > >     +------+                         +------+
> > >         |                                |
> > > ---------+------------+-------------------+------
> > >                         |
> > >                     +-----+
> > >                     | ro3 | 10.10.10.10  mtu1500
> > >                     +-----+
> > >                         |
> > >     ========================================
> > >                 some networks
> > >     ========================================
> > >                         |
> > >                     +-----+
> > >                     | eth0| 10.10.30.30  mtu9000
> > >                     +-----+
> > >                         |  host2
> > > 
> > > host1 have enabled multipath and
> > > sysctl net.ipv4.fib_multipath_hash_policy = 1:
> > > 
> > > default proto static src 10.179.20.18
> > >         nexthop via 10.179.2.12 dev ens17f1 weight 1
> > >         nexthop via 10.179.2.140 dev ens17f0 weight 1
> > > 
> > > When host1 tries to do pmtud from 10.179.20.18/32 to host2,
> > > host1 receives at ens17f1 iface an icmp packet from ro3 that ro3 mtu=1500.
> > > And host1 caches it in nexthop exceptions cache.
> > > 
> > > Problem is that it is cached only for the iface that has received icmp,
> > > and there is no way that ro3 will send icmp msg to host1 via another path.
> > > 
> > > Host1 now have this routes to host2:
> > > 
> > > ip r g 10.10.30.30 sport 30000 dport 443
> > > 10.10.30.30 via 10.179.2.12 dev ens17f1 src 10.179.20.18 uid 0
> > >     cache expires 521sec mtu 1500
> > > 
> > > ip r g 10.10.30.30 sport 30033 dport 443
> > > 10.10.30.30 via 10.179.2.140 dev ens17f0 src 10.179.20.18 uid 0
> > >     cache
> > > 
> > 
> > well known problem, and years ago I meant to send a similar patch.
>
> Doesn't IPv6 suffer from a similar problem?

I am not very familiar with IPv6, but I tried to reproduce the same
problem with my tests on the same topology.

ip netns exec ns_a-AHtoRb ip -6 r g fc00:1001::2:2 sport 30003 dport 443
fc00:1001::2:2 via fc00:2::2 dev veth_A-R2 src fc00:1000::1:1 metric 1024 expires 495sec mtu 1500 pref medium

ip netns exec ns_a-AHtoRb ip -6 r g fc00:1001::2:2 sport 30013 dport 443
fc00:1001::2:2 via fc00:1::2 dev veth_A-R1 src fc00:1000::1:1 metric 1024 expires 484sec mtu 1500 pref medium

It seems that there is no such problem with IPv6: we have nhce entries for both paths.

>
> > 
> > Can you add a test case under selftests; you will see many pmtu,
> > redirect and multipath tests.
> > 
> > > So when host1 tries again to reach host2 with mtu>1500,
> > > if packet flow is lucky enough to be hashed with oif=ens17f1 its ok,
> > > if oif=ens17f0 it blackholes and still gets icmp msgs from ro3 to ens17f1,
> > > until lucky day when ro3 will send it through another flow to ens17f0.
> > > 
> > > Signed-off-by: Vladimir Vdovin <deliran@verdict.gg>
>
> Thanks for the detailed commit message
David Ahern Nov. 5, 2024, 3:52 a.m. UTC | #4
On 11/2/24 10:20 AM, Vladimir Vdovin wrote:
>>
>> Doesn't IPv6 suffer from a similar problem?

I believe the answer is yes, but do not have time to find a reproducer
right now.

> 
> I am not very familiar with ipv6,
> but I tried to reproduce same problem with my tests with same topology.
> 
> ip netns exec ns_a-AHtoRb ip -6 r g fc00:1001::2:2 sport 30003 dport 443
> fc00:1001::2:2 via fc00:2::2 dev veth_A-R2 src fc00:1000::1:1 metric 1024 expires 495sec mtu 1500 pref medium
> 
> ip netns exec ns_a-AHtoRb ip -6 r g fc00:1001::2:2 sport 30013 dport 443
> fc00:1001::2:2 via fc00:1::2 dev veth_A-R1 src fc00:1000::1:1 metric 1024 expires 484sec mtu 1500 pref medium
> 
> It seems that there are no problems with ipv6. We have nhce entries for both paths.

Does rt6_cache_allowed_for_pmtu return true or false for this test?
Vladimir Vdovin Nov. 6, 2024, 5:20 p.m. UTC | #5
On Tue Nov 5, 2024 at 6:52 AM MSK, David Ahern wrote:
> On 11/2/24 10:20 AM, Vladimir Vdovin wrote:
> >>
> >> Doesn't IPv6 suffer from a similar problem?
>
> I believe the answer is yes, but do not have time to find a reproducer
> right now.
>
> > 
> > I am not very familiar with ipv6,
> > but I tried to reproduce same problem with my tests with same topology.
> > 
> > ip netns exec ns_a-AHtoRb ip -6 r g fc00:1001::2:2 sport 30003 dport 443
> > fc00:1001::2:2 via fc00:2::2 dev veth_A-R2 src fc00:1000::1:1 metric 1024 expires 495sec mtu 1500 pref medium
> > 
> > ip netns exec ns_a-AHtoRb ip -6 r g fc00:1001::2:2 sport 30013 dport 443
> > fc00:1001::2:2 via fc00:1::2 dev veth_A-R1 src fc00:1000::1:1 metric 1024 expires 484sec mtu 1500 pref medium
> > 
> > It seems that there are no problems with ipv6. We have nhce entries for both paths.
>
> Does rt6_cache_allowed_for_pmtu return true or false for this test?
It returns true.
David Ahern Nov. 6, 2024, 6:57 p.m. UTC | #6
On 11/6/24 10:20 AM, Vladimir Vdovin wrote:
> On Tue Nov 5, 2024 at 6:52 AM MSK, David Ahern wrote:
>> On 11/2/24 10:20 AM, Vladimir Vdovin wrote:
>>>>
>>>> Doesn't IPv6 suffer from a similar problem?
>>
>> I believe the answer is yes, but do not have time to find a reproducer
>> right now.
>>
>>>
>>> I am not very familiar with ipv6,
>>> but I tried to reproduce same problem with my tests with same topology.
>>>
>>> ip netns exec ns_a-AHtoRb ip -6 r g fc00:1001::2:2 sport 30003 dport 443
>>> fc00:1001::2:2 via fc00:2::2 dev veth_A-R2 src fc00:1000::1:1 metric 1024 expires 495sec mtu 1500 pref medium
>>>
>>> ip netns exec ns_a-AHtoRb ip -6 r g fc00:1001::2:2 sport 30013 dport 443
>>> fc00:1001::2:2 via fc00:1::2 dev veth_A-R1 src fc00:1000::1:1 metric 1024 expires 484sec mtu 1500 pref medium

You should dump the cache to see the full exception list.

>>>
>>> It seems that there are no problems with ipv6. We have nhce entries for both paths.
>>
>> Does rt6_cache_allowed_for_pmtu return true or false for this test?
> It returns true.
> 
> 

Looking at the code, it is creating a single exception - not one per
path. I am fine with deferring the IPv6 patch until someone with time
and interest can work on it.

Patch

diff --git a/net/ipv4/route.c b/net/ipv4/route.c
index 723ac9181558..8eac6e361388 100644
--- a/net/ipv4/route.c
+++ b/net/ipv4/route.c
@@ -1027,10 +1027,23 @@  static void __ip_rt_update_pmtu(struct rtable *rt, struct flowi4 *fl4, u32 mtu)
 		struct fib_nh_common *nhc;
 
 		fib_select_path(net, &res, fl4, NULL);
+#ifdef CONFIG_IP_ROUTE_MULTIPATH
+		if (fib_info_num_path(res.fi) > 1) {
+			int nhsel;
+
+			for (nhsel = 0; nhsel < fib_info_num_path(fi); nhsel++) {
+				nhc = fib_info_nhc(res.fi, nhsel);
+				update_or_create_fnhe(nhc, fl4->daddr, 0, mtu, lock,
+					jiffies + net->ipv4.ip_rt_mtu_expires);
+			}
+			goto rcu_unlock;
+		}
+#endif /* CONFIG_IP_ROUTE_MULTIPATH */
 		nhc = FIB_RES_NHC(res);
 		update_or_create_fnhe(nhc, fl4->daddr, 0, mtu, lock,
 				      jiffies + net->ipv4.ip_rt_mtu_expires);
 	}
+rcu_unlock:
 	rcu_read_unlock();
 }
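
[Editor's note: a minimal respin that keeps the structure of the hunk above
would need two fixes: use res.fi rather than the nonexistent local fi in the
loop bound, and guard the rcu_unlock label with the same #ifdef so it is not
left unused when CONFIG_IP_ROUTE_MULTIPATH is off. An untested sketch of how
the tail of __ip_rt_update_pmtu might then look:

	rcu_read_lock();
	if (fib_lookup(net, fl4, &res, 0) == 0) {
		struct fib_nh_common *nhc;

		fib_select_path(net, &res, fl4, NULL);
#ifdef CONFIG_IP_ROUTE_MULTIPATH
		if (fib_info_num_path(res.fi) > 1) {
			int nhsel;

			/* Install the exception on every path of the route. */
			for (nhsel = 0; nhsel < fib_info_num_path(res.fi); nhsel++) {
				nhc = fib_info_nhc(res.fi, nhsel);
				update_or_create_fnhe(nhc, fl4->daddr, 0, mtu, lock,
						      jiffies + net->ipv4.ip_rt_mtu_expires);
			}
			goto rcu_unlock;
		}
#endif /* CONFIG_IP_ROUTE_MULTIPATH */
		nhc = FIB_RES_NHC(res);
		update_or_create_fnhe(nhc, fl4->daddr, 0, mtu, lock,
				      jiffies + net->ipv4.ip_rt_mtu_expires);
	}
#ifdef CONFIG_IP_ROUTE_MULTIPATH
rcu_unlock:
#endif
	rcu_read_unlock();

This variant is closest to the submitted patch; the IS_ENABLED() form sketched
earlier in the thread avoids the conditional label altogether.]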