Message ID: 20201111193737.1793-1-pablo@netfilter.org
Series: netfilter: flowtable bridge and vlan enhancements
On Wed, 11 Nov 2020 20:37:28 +0100 Pablo Neira Ayuso wrote:
> The following patchset augments the Netfilter flowtable fastpath [1] to support network topologies that combine IP forwarding, bridge and VLAN devices.
>
> A typical scenario that can benefit from this infrastructure is composed of several VMs connected to bridge ports where the bridge master device 'br0' has an IP address. A DHCP server is also assumed to be running to provide connectivity to the VMs. The VMs reach the Internet through 'br0' as default gateway, which makes the packet enter the IP forwarding path. Then, netfilter is used to NAT the packets before they leave through the wan device.
>
> Something like this:
>
>                        fast path
>               .------------------------.
>              /                          \
>             |          IP forwarding    |
>             |         /             \   .
>             |      br0               eth0
>             .      /  \
>          -- veth1    veth2
>         .
>         .
>         .
>       eth0
> ab:cd:ef:ab:cd:ef
>        VM
>
> The idea is to accelerate forwarding by building a fast path that takes packets from the ingress path of the bridge port and places them in the egress path of the wan device (and vice versa), hence skipping the classic bridge and IP stack paths.

The problem that immediately comes to mind is that if there is any dynamic forwarding state, the cache you're creating would need to be flushed when the FDB changes. Are you expecting users would plug into the flowtable only those devices where they know things are fairly static?
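[ For reference: the fast path described above is something the administrator opts into through the nftables ruleset. A minimal sketch for the topology in the diagram (device names veth1, veth2 and eth0 are taken from it, and the masquerade rule stands in for whatever NAT policy is actually in place) could look like this: ]

    table inet fastpath_demo {
        flowtable f {
            # bridge ports plus the wan device; resolving the forward
            # path through br0 down to these ports is what this series adds
            hook ingress priority 0; devices = { veth1, veth2, eth0 };
        }

        chain forward {
            type filter hook forward priority 0; policy accept;
            # place forwarded TCP/UDP flows into the flowtable
            meta l4proto { tcp, udp } flow add @f
        }

        chain postrouting {
            type nat hook postrouting priority srcnat; policy accept;
            oifname "eth0" masquerade
        }
    }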
On Fri, Nov 13, 2020 at 05:55:56PM -0800, Jakub Kicinski wrote:
> On Wed, 11 Nov 2020 20:37:28 +0100 Pablo Neira Ayuso wrote:
> > The following patchset augments the Netfilter flowtable fastpath [1] to support network topologies that combine IP forwarding, bridge and VLAN devices.
> >
> > A typical scenario that can benefit from this infrastructure is composed of several VMs connected to bridge ports where the bridge master device 'br0' has an IP address. A DHCP server is also assumed to be running to provide connectivity to the VMs. The VMs reach the Internet through 'br0' as default gateway, which makes the packet enter the IP forwarding path. Then, netfilter is used to NAT the packets before they leave through the wan device.
> >
> > Something like this:
> >
> >                        fast path
> >               .------------------------.
> >              /                          \
> >             |          IP forwarding    |
> >             |         /             \   .
> >             |      br0               eth0
> >             .      /  \
> >          -- veth1    veth2
> >         .
> >         .
> >         .
> >       eth0
> > ab:cd:ef:ab:cd:ef
> >        VM
> >
> > The idea is to accelerate forwarding by building a fast path that takes packets from the ingress path of the bridge port and places them in the egress path of the wan device (and vice versa), hence skipping the classic bridge and IP stack paths.
>
> The problem that immediately comes to mind is that if there is any dynamic forwarding state, the cache you're creating would need to be flushed when the FDB changes. Are you expecting users would plug into the flowtable only those devices where they know things are fairly static?

If any of the flowtable devices goes down / is removed, the entries are removed from the flowtable. This means packets of existing flows are pushed back up to the classic bridge / forwarding path to re-evaluate the fast path.

For each new flow, the fast path is selected freshly, so it uses the up-to-date FDB to select a new bridge port.

Existing flows still follow the old path. The same happens with FIB currently.

It should be possible to explore purging entries in the flowtable that are stale due to changes in the topology (either in FDB or FIB).

What scenario do you have specifically in mind? Something like a VM migrating from one bridge port to another?

Thank you.
On Sat, Nov 14, 2020 at 12:59, Pablo Neira Ayuso <pablo@netfilter.org> wrote:
> If any of the flowtable devices goes down / is removed, the entries are removed from the flowtable. This means packets of existing flows are pushed back up to the classic bridge / forwarding path to re-evaluate the fast path.
>
> For each new flow, the fast path is selected freshly, so it uses the up-to-date FDB to select a new bridge port.
>
> Existing flows still follow the old path. The same happens with FIB currently.
>
> It should be possible to explore purging entries in the flowtable that are stale due to changes in the topology (either in FDB or FIB).
>
> What scenario do you have specifically in mind? Something like a VM migrating from one bridge port to another?

This should work in the case when the bridge ports are normal NICs or switchdev ports, right?

In that case, relying on link state is brittle as you can easily have a switch or a media converter between the bridge and the end-station:

        br0                    br0
       /   \                  /   \
    eth0    eth1           eth0    eth1
      /       \      =>      /       \
   [sw0]     [sw1]        [sw0]     [sw1]
      /                                \
     A                                  A

In a scenario like this, A has clearly moved. But neither eth0 nor eth1 has seen any changes in link state.

This particular example is a bit contrived. But this is essentially what happens in redundant topologies when reconfigurations occur (e.g. STP).

These protocols will typically signal reconfigurations to all bridges though, so as long as the affected flows are flushed at the same time as the FDB it should work.

Interesting stuff!
On Sat, 14 Nov 2020 15:00:03 +0100 Tobias Waldekranz wrote:
> On Sat, Nov 14, 2020 at 12:59, Pablo Neira Ayuso <pablo@netfilter.org> wrote:
> > If any of the flowtable devices goes down / is removed, the entries are removed from the flowtable. This means packets of existing flows are pushed back up to the classic bridge / forwarding path to re-evaluate the fast path.
> >
> > For each new flow, the fast path is selected freshly, so it uses the up-to-date FDB to select a new bridge port.
> >
> > Existing flows still follow the old path. The same happens with FIB currently.
> >
> > It should be possible to explore purging entries in the flowtable that are stale due to changes in the topology (either in FDB or FIB).
> >
> > What scenario do you have specifically in mind? Something like a VM migrating from one bridge port to another?

Indeed, 2 VMs A and B, talking to each other, A is _outside_ the system (reachable via eth0), B is inside (veth1). When A moves inside and gets its veth, neither B's veth1 nor eth0 will change state, so the cache wouldn't get flushed, right?

> This should work in the case when the bridge ports are normal NICs or switchdev ports, right?
>
> In that case, relying on link state is brittle as you can easily have a switch or a media converter between the bridge and the end-station:
>
>         br0                    br0
>        /   \                  /   \
>     eth0    eth1           eth0    eth1
>       /       \      =>      /       \
>    [sw0]     [sw1]        [sw0]     [sw1]
>       /                                \
>      A                                  A
>
> In a scenario like this, A has clearly moved. But neither eth0 nor eth1 has seen any changes in link state.
>
> This particular example is a bit contrived. But this is essentially what happens in redundant topologies when reconfigurations occur (e.g. STP).
>
> These protocols will typically signal reconfigurations to all bridges though, so as long as the affected flows are flushed at the same time as the FDB it should work.
>
> Interesting stuff!

Agreed, could be interesting for all NAT/conntrack setups, not just VMs.
On Sat, Nov 14, 2020 at 09:03:47AM -0800, Jakub Kicinski wrote:
> On Sat, 14 Nov 2020 15:00:03 +0100 Tobias Waldekranz wrote:
> > On Sat, Nov 14, 2020 at 12:59, Pablo Neira Ayuso <pablo@netfilter.org> wrote:
> > > If any of the flowtable devices goes down / is removed, the entries are removed from the flowtable. This means packets of existing flows are pushed back up to the classic bridge / forwarding path to re-evaluate the fast path.
> > >
> > > For each new flow, the fast path is selected freshly, so it uses the up-to-date FDB to select a new bridge port.
> > >
> > > Existing flows still follow the old path. The same happens with FIB currently.
> > >
> > > It should be possible to explore purging entries in the flowtable that are stale due to changes in the topology (either in FDB or FIB).
> > >
> > > What scenario do you have specifically in mind? Something like a VM migrating from one bridge port to another?
>
> Indeed, 2 VMs A and B, talking to each other, A is _outside_ the system (reachable via eth0), B is inside (veth1). When A moves inside and gets its veth, neither B's veth1 nor eth0 will change state, so the cache wouldn't get flushed, right?

The flow tuple includes the input interface as part of the hash key, so packets will not match the existing entries in the flowtable after the topology update. The stale flow entries are removed after 30 seconds if no matching packets are seen. New flow entries will be created for the new topology: a few packets have to go through the classic forwarding path so that the flow entries representing the flow in the new topology are created.
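[ A simplified sketch of the lookup key being referred to here, abridged from the flowtable definitions in include/net/netfilter/nf_flow_table.h (field order and the omitted members are not exact; kernel context, needs <linux/types.h>, <linux/in.h>, <linux/in6.h>): ]

    /* Besides addresses, ports and protocols, the input interface index
     * is part of the tuple, so after a topology change packets arriving
     * on a different port simply miss the cached entry and fall back to
     * the classic path.  The "30 seconds" mentioned above is the flow
     * timeout (30 * HZ) applied once no packets match an entry anymore.
     */
    struct flow_offload_tuple {
            union {
                    struct in_addr          src_v4;
                    struct in6_addr         src_v6;
            };
            union {
                    struct in_addr          dst_v4;
                    struct in6_addr         dst_v6;
            };
            __be16                          src_port;
            __be16                          dst_port;

            int                             iifidx; /* input interface index */

            u8                              l3proto;
            u8                              l4proto;
            /* ... cached output route / interface follows ... */
    };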
On Mon, 16 Nov 2020 23:18:15 +0100 Pablo Neira Ayuso wrote:
> On Sat, Nov 14, 2020 at 09:03:47AM -0800, Jakub Kicinski wrote:
> > On Sat, 14 Nov 2020 15:00:03 +0100 Tobias Waldekranz wrote:
> > > On Sat, Nov 14, 2020 at 12:59, Pablo Neira Ayuso <pablo@netfilter.org> wrote:
> > > > If any of the flowtable devices goes down / is removed, the entries are removed from the flowtable. This means packets of existing flows are pushed back up to the classic bridge / forwarding path to re-evaluate the fast path.
> > > >
> > > > For each new flow, the fast path is selected freshly, so it uses the up-to-date FDB to select a new bridge port.
> > > >
> > > > Existing flows still follow the old path. The same happens with FIB currently.
> > > >
> > > > It should be possible to explore purging entries in the flowtable that are stale due to changes in the topology (either in FDB or FIB).
> > > >
> > > > What scenario do you have specifically in mind? Something like a VM migrating from one bridge port to another?
> >
> > Indeed, 2 VMs A and B, talking to each other, A is _outside_ the system (reachable via eth0), B is inside (veth1). When A moves inside and gets its veth, neither B's veth1 nor eth0 will change state, so the cache wouldn't get flushed, right?
>
> The flow tuple includes the input interface as part of the hash key, so packets will not match the existing entries in the flowtable after the topology update.

To be clear - the input interface for B -> A traffic remains B. So if B was talking to A before A moved it will keep hitting the cached entry.

Are you saying A -> B traffic won't match so it will update the cache, since conntrack flows are bi-directional?

> The stale flow entries are removed after 30 seconds if no matching packets are seen. New flow entries will be created for the new topology: a few packets have to go through the classic forwarding path so that the flow entries representing the flow in the new topology are created.
On Sat, Nov 14, 2020 at 03:00:03PM +0100, Tobias Waldekranz wrote:
> On Sat, Nov 14, 2020 at 12:59, Pablo Neira Ayuso <pablo@netfilter.org> wrote:
> > If any of the flowtable devices goes down / is removed, the entries are removed from the flowtable. This means packets of existing flows are pushed back up to the classic bridge / forwarding path to re-evaluate the fast path.
> >
> > For each new flow, the fast path is selected freshly, so it uses the up-to-date FDB to select a new bridge port.
> >
> > Existing flows still follow the old path. The same happens with FIB currently.
> >
> > It should be possible to explore purging entries in the flowtable that are stale due to changes in the topology (either in FDB or FIB).
> >
> > What scenario do you have specifically in mind? Something like a VM migrating from one bridge port to another?
>
> This should work in the case when the bridge ports are normal NICs or switchdev ports, right?

Yes.

> In that case, relying on link state is brittle as you can easily have a switch or a media converter between the bridge and the end-station:
>
>         br0                    br0
>        /   \                  /   \
>     eth0    eth1           eth0    eth1
>       /       \      =>      /       \
>    [sw0]     [sw1]        [sw0]     [sw1]
>       /                                \
>      A                                  A
>
> In a scenario like this, A has clearly moved. But neither eth0 nor eth1 has seen any changes in link state.
>
> This particular example is a bit contrived. But this is essentially what happens in redundant topologies when reconfigurations occur (e.g. STP).
>
> These protocols will typically signal reconfigurations to all bridges though, so as long as the affected flows are flushed at the same time as the FDB it should work.

Yes, watching the FDB should allow stale flows to be cleaned up immediately.
On Mon, Nov 16, 2020 at 02:28:44PM -0800, Jakub Kicinski wrote:
> On Mon, 16 Nov 2020 23:18:15 +0100 Pablo Neira Ayuso wrote:
> > On Sat, Nov 14, 2020 at 09:03:47AM -0800, Jakub Kicinski wrote:
> > > On Sat, 14 Nov 2020 15:00:03 +0100 Tobias Waldekranz wrote:
> > > > On Sat, Nov 14, 2020 at 12:59, Pablo Neira Ayuso <pablo@netfilter.org> wrote:
> > > > > If any of the flowtable devices goes down / is removed, the entries are removed from the flowtable. This means packets of existing flows are pushed back up to the classic bridge / forwarding path to re-evaluate the fast path.
> > > > >
> > > > > For each new flow, the fast path is selected freshly, so it uses the up-to-date FDB to select a new bridge port.
> > > > >
> > > > > Existing flows still follow the old path. The same happens with FIB currently.
> > > > >
> > > > > It should be possible to explore purging entries in the flowtable that are stale due to changes in the topology (either in FDB or FIB).
> > > > >
> > > > > What scenario do you have specifically in mind? Something like a VM migrating from one bridge port to another?
> > >
> > > Indeed, 2 VMs A and B, talking to each other, A is _outside_ the system (reachable via eth0), B is inside (veth1). When A moves inside and gets its veth, neither B's veth1 nor eth0 will change state, so the cache wouldn't get flushed, right?
> >
> > The flow tuple includes the input interface as part of the hash key, so packets will not match the existing entries in the flowtable after the topology update.
>
> To be clear - the input interface for B -> A traffic remains B. So if B was talking to A before A moved it will keep hitting the cached entry.

Yes, traffic for B -> A still hits the cached entry.

> Are you saying A -> B traffic won't match so it will update the cache, since conntrack flows are bi-directional?

Yes, traffic for A -> B won't match the flowtable entry; this will update the cache.
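[ The asymmetry discussed here follows from the shape of a flowtable entry: one conntrack-backed entry carries a tuple per direction, each keyed on its own input interface. A rough sketch, abridged from the existing flow_offload definitions (exact members differ): ]

    /* One offloaded flow, two directional tuples.  After A moves, the
     * B -> A tuple still matches (its input interface is unchanged) and
     * keeps using the cached entry, while A -> B misses, goes through
     * the classic path, and ends up refreshing the cache.
     */
    enum flow_offload_tuple_dir {
            FLOW_OFFLOAD_DIR_ORIGINAL,      /* e.g. B -> A */
            FLOW_OFFLOAD_DIR_REPLY,         /* e.g. A -> B */
            FLOW_OFFLOAD_DIR_MAX
    };

    struct flow_offload {
            struct flow_offload_tuple_rhash tuplehash[FLOW_OFFLOAD_DIR_MAX];
            struct nf_conn                  *ct;    /* backing conntrack entry */
            unsigned long                   flags;
            u32                             timeout;
            /* ... */
    };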
On Mon, 16 Nov 2020 23:36:15 +0100 Pablo Neira Ayuso wrote:
> > Are you saying A -> B traffic won't match so it will update the cache, since conntrack flows are bi-directional?
>
> Yes, traffic for A -> B won't match the flowtable entry; this will update the cache.

That's assuming there will be A -> B traffic without B sending a request which reaches A, first.
On Mon, Nov 16, 2020 at 02:45:21PM -0800, Jakub Kicinski wrote:
> On Mon, 16 Nov 2020 23:36:15 +0100 Pablo Neira Ayuso wrote:
> > > Are you saying A -> B traffic won't match so it will update the cache, since conntrack flows are bi-directional?
> >
> > Yes, traffic for A -> B won't match the flowtable entry; this will update the cache.
>
> That's assuming there will be A -> B traffic without B sending a request which reaches A, first.

B might send packets to A but this will not get anywhere. Assuming TCP, this will trigger retransmissions, so B -> A will kick in to refresh the entry.

Is the scenario that you describe a showstopper?

Thank you.
On Mon, Nov 16, 2020 at 11:56:58PM +0100, Pablo Neira Ayuso wrote:
> On Mon, Nov 16, 2020 at 02:45:21PM -0800, Jakub Kicinski wrote:
> > On Mon, 16 Nov 2020 23:36:15 +0100 Pablo Neira Ayuso wrote:
> > > > Are you saying A -> B traffic won't match so it will update the cache, since conntrack flows are bi-directional?
> > >
> > > Yes, traffic for A -> B won't match the flowtable entry; this will update the cache.
> >
> > That's assuming there will be A -> B traffic without B sending a request which reaches A, first.
>
> B might send packets to A but this will not get anywhere. Assuming TCP, this will trigger retransmissions, so B -> A will kick in to refresh the entry.
>
> Is the scenario that you describe a showstopper?

I have been discussing the topology update by tracking fdb updates with the bridge maintainer; I'll be exploring extensions to the existing fdb_notify() infrastructure to deal with this scenario you describe. On my side this topology update scenario is not a priority to be supported in this patchset, but it's feasible to support it later on.
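[ Purely as a sketch of the idea being explored above, not code from this series: the switchdev FDB notifier chain and the per-device flush helper exist today, but wiring them together for flowtable invalidation is the hypothetical part, and nf_flow_table_cleanup() flushes per device rather than per learned address, so a real implementation would likely want a finer-grained helper. ]

    #include <linux/notifier.h>
    #include <net/switchdev.h>
    #include <net/netfilter/nf_flow_table.h>

    /* Drop cached fast-path entries as soon as the bridge FDB changes for
     * a port, so stations that move stop hitting stale flowtable entries.
     */
    static int flowtable_fdb_event(struct notifier_block *nb,
                                   unsigned long event, void *ptr)
    {
            struct net_device *dev = switchdev_notifier_info_to_dev(ptr);

            switch (event) {
            case SWITCHDEV_FDB_ADD_TO_DEVICE:
            case SWITCHDEV_FDB_DEL_TO_DEVICE:
                    /* coarse invalidation: flush every flow using this port */
                    nf_flow_table_cleanup(dev);
                    break;
            }
            return NOTIFY_DONE;
    }

    static struct notifier_block flowtable_fdb_nb = {
            .notifier_call = flowtable_fdb_event,
    };

    /* register_switchdev_notifier(&flowtable_fdb_nb) at module init,
     * unregister_switchdev_notifier() on exit.
     */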
On Sat, 21 Nov 2020 13:31:38 +0100 Pablo Neira Ayuso wrote:
> On Mon, Nov 16, 2020 at 11:56:58PM +0100, Pablo Neira Ayuso wrote:
> > On Mon, Nov 16, 2020 at 02:45:21PM -0800, Jakub Kicinski wrote:
> > > On Mon, 16 Nov 2020 23:36:15 +0100 Pablo Neira Ayuso wrote:
> > > > > Are you saying A -> B traffic won't match so it will update the cache, since conntrack flows are bi-directional?
> > > >
> > > > Yes, traffic for A -> B won't match the flowtable entry; this will update the cache.
> > >
> > > That's assuming there will be A -> B traffic without B sending a request which reaches A, first.
> >
> > B might send packets to A but this will not get anywhere. Assuming TCP, this will trigger retransmissions, so B -> A will kick in to refresh the entry.
> >
> > Is the scenario that you describe a showstopper?

Sorry I got distracted.

> I have been discussing the topology update by tracking fdb updates with the bridge maintainer; I'll be exploring extensions to the existing fdb_notify() infrastructure to deal with this scenario you describe. On my side this topology update scenario is not a priority to be supported in this patchset, but it's feasible to support it later on.

My concern is that invalidation is _the_ hard part of creating caches. And I feel like merging this as is would be setting our standards pretty low.

Please gather some review tags from senior netdev developers. I don't feel confident enough to apply this as 100% my own decision.
On Sat, Nov 21, 2020 at 10:15:51AM -0800, Jakub Kicinski wrote:
> On Sat, 21 Nov 2020 13:31:38 +0100 Pablo Neira Ayuso wrote:
> > On Mon, Nov 16, 2020 at 11:56:58PM +0100, Pablo Neira Ayuso wrote:
> > > On Mon, Nov 16, 2020 at 02:45:21PM -0800, Jakub Kicinski wrote:
> > > > On Mon, 16 Nov 2020 23:36:15 +0100 Pablo Neira Ayuso wrote:
[...]
> > I have been discussing the topology update by tracking fdb updates with the bridge maintainer; I'll be exploring extensions to the existing fdb_notify() infrastructure to deal with this scenario you describe. On my side this topology update scenario is not a priority to be supported in this patchset, but it's feasible to support it later on.
>
> My concern is that invalidation is _the_ hard part of creating caches. And I feel like merging this as is would be setting our standards pretty low.

Interesting, let's summarize a bit to make sure we're on the same page:

- This "cache" is optional, you enable it on demand through the ruleset.
- This "cache" is configurable, you can specify through ruleset policy what flows get into the cache and _when_ they are placed in the cache.
- This is not affecting any existing default configuration, neither Linux networking nor classic path Netfilter configurations; it's a rather new thing.
- This is showing a performance improvement of ~50% with a very simple testbed. Back a few years ago, I was reaching a x2.5 performance boost in software in a pktgen testbed.
- This is adding minimal changes to netdev_ops, just a single callback.

For the live VM migration you describe, connections might time out, but there are many use-cases where this is still valid, some of which have been described already here.

> Please gather some review tags from senior netdev developers. I don't feel confident enough to apply this as 100% my own decision.

Fair enough.

This requirement for very specific Netfilter infrastructure which does not affect any other Networking subsystem sounds new to me.

Which senior developers specifically would you like me to poke to get an acknowledgement on this, so that it can be accepted?

Thank you.
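[ To illustrate the second bullet: the rule that feeds the cache is an ordinary forward-chain rule, so it can be made as narrow as desired. A hypothetical example, reusing the flowtable from the earlier ruleset sketch: ]

    # Only long-lived HTTPS flows leaving through the wan device are placed
    # in the flowtable; everything else stays on the classic path.
    tcp dport 443 ct original packets > 20 oifname "eth0" flow add @f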
On Sat, 21 Nov 2020 19:56:21 +0100 Pablo Neira Ayuso wrote:
> > Please gather some review tags from senior netdev developers. I don't feel confident enough to apply this as 100% my own decision.
>
> Fair enough.
>
> This requirement for very specific Netfilter infrastructure which does not affect any other Networking subsystem sounds new to me.

You mean me asking for reviews from other senior folks when I don't feel good about some code? I've asked others the same thing in the past, e.g. Paolo for his RPS thing.

> Which senior developers specifically would you like me to poke to get an acknowledgement on this, so that it can be accepted?

I don't want to make a list. Maybe netconf attendees are a safe bet?
Hi Jakub,

On Sat, Nov 21, 2020 at 11:23:48AM -0800, Jakub Kicinski wrote:
> On Sat, 21 Nov 2020 19:56:21 +0100 Pablo Neira Ayuso wrote:
> > > Please gather some review tags from senior netdev developers. I don't feel confident enough to apply this as 100% my own decision.
> >
> > Fair enough.
> >
> > This requirement for very specific Netfilter infrastructure which does not affect any other Networking subsystem sounds new to me.
>
> You mean me asking for reviews from other senior folks when I don't feel good about some code? I've asked others the same thing in the past, e.g. Paolo for his RPS thing.

No, I'm perfectly fine with peer review.

Note that I am sending this to net-next as a patchset (not as a PR) _only_ because this is adding a new .ndo_fill_forward_path to netdev_ops. That's the only thing that is relevant to the Netdev core infrastructure IMO, and this new .ndo is private, not exposed to userspace.

Let's have a look at the diffstat again:

 include/linux/netdevice.h                 |  35 +++
 include/net/netfilter/nf_flow_table.h     |  43 +++-
 net/8021q/vlan_dev.c                      |  15 ++
 net/bridge/br_device.c                    |  27 +++
 net/core/dev.c                            |  46 ++++
 net/netfilter/nf_flow_table_core.c        |  51 +++--
 net/netfilter/nf_flow_table_ip.c          | 200 ++++++++++++++----
 net/netfilter/nft_flow_offload.c          | 159 +++++++++++++-
 .../selftests/netfilter/nft_flowtable.sh  |  82 +++++++
 9 files changed, 598 insertions(+), 60 deletions(-)

So this is adding _minimal_ changes to the NetDev infrastructure. Most of the code is an extension to the flowtable Netfilter infrastructure. And the flowtable has been a cache since its conception.

I am adding the .ndo indirection to avoid module dependencies: e.g. Netfilter could use a direct reference to a bridge function, but that would pull in the bridge modules.

> > Which senior developers specifically would you like me to poke to get an acknowledgement on this, so that it can be accepted?
>
> I don't want to make a list. Maybe netconf attendees are a safe bet?

I have no idea who to ask; traditionally it's the NetDev maintainer (AFAIK it's only you at this stage) who has the last word on whether something gets merged. I consider all developers that have reviewed this patchset to be senior developers.
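[ For readers not following the patches themselves, the shape of the addition being discussed is roughly the following. This is a sketch based on the description in this thread, not the exact signatures from the series; the path-type names and context fields shown are illustrative. ]

    /* One extra callback in net_device_ops: a device that knows how the
     * next hop is reached (a bridge consulting its FDB, a VLAN device
     * pointing at its real device) fills in one step of the forwarding
     * path.  The flowtable can then resolve br0 down to the bridge port
     * and the wan device without referencing bridge code directly, which
     * is what keeps the bridge and Netfilter modules decoupled.
     */
    enum net_device_path_type {
            DEV_PATH_ETHERNET,
            DEV_PATH_VLAN,
            DEV_PATH_BRIDGE,
    };

    struct net_device_path {
            enum net_device_path_type      type;
            const struct net_device        *dev;   /* next device in the path */
    };

    struct net_device_path_ctx {
            const struct net_device        *dev;   /* current device */
            const u8                       *daddr; /* destination MAC to resolve */
    };

    /* added to struct net_device_ops */
    int (*ndo_fill_forward_path)(struct net_device_path_ctx *ctx,
                                 struct net_device_path *path);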