Message ID | 20210320223448.2452869-9-olteanv@gmail.com (mailing list archive) |
---|---|
State | Superseded |
Delegated to: | Netdev Maintainers |
Series | Better support for sandwiched LAGs with bridge and DSA |
Context | Check | Description |
---|---|---|
netdev/cover_letter | success | |
netdev/fixes_present | success | |
netdev/patch_count | success | |
netdev/tree_selection | success | Clearly marked for net-next |
netdev/subject_prefix | success | |
netdev/cc_maintainers | warning | 1 maintainers not CCed: bridge@lists.linux-foundation.org |
netdev/source_inline | success | Was 0 now: 0 |
netdev/verify_signedoff | success | |
netdev/module_param | success | Was 0 now: 0 |
netdev/build_32bit | success | Errors and warnings before: 55 this patch: 55 |
netdev/kdoc | success | Errors and warnings before: 0 this patch: 0 |
netdev/verify_fixes | success | |
netdev/checkpatch | warning | CHECK: Please use a blank line after function/struct/union/enum declarations |
netdev/build_allmodconfig_warn | success | Errors and warnings before: 55 this patch: 55 |
netdev/header_inline | success | |
On 21/03/2021 00:34, Vladimir Oltean wrote:
> From: Vladimir Oltean <vladimir.oltean@nxp.com>
>
> I have udhcpcd in my system and this is configured to bring interfaces
> up as soon as they are created.
>
> I create a bridge as follows:
>
> ip link add br0 type bridge
>
> As soon as I create the bridge and udhcpcd brings it up, I also have
> avahi which automatically starts sending IPv6 packets to advertise some
> local services, and because of that, the br0 bridge joins the following
> IPv6 groups due to the code path detailed below:
>
> 33:33:ff:6d:c1:9c vid 0
> 33:33:00:00:00:6a vid 0
> 33:33:00:00:00:fb vid 0
>
> br_dev_xmit
> -> br_multicast_rcv
> -> br_ip6_multicast_add_group
> -> __br_multicast_add_group
> -> br_multicast_host_join
> -> br_mdb_notify
>
> This is all fine, but inside br_mdb_notify we have br_mdb_switchdev_host
> hooked up, and switchdev will attempt to offload the host joined groups
> to an empty list of ports. Of course nobody offloads them.
>
> Then when we add a port to br0:
>
> ip link set swp0 master br0
>
> the bridge doesn't replay the host-joined MDB entries from br_add_if,
> and eventually the host joined addresses expire, and a switchdev
> notification for deleting it is emitted, but surprise, the original
> addition was already completely missed.
>
> The strategy to address this problem is to replay the MDB entries (both
> the port ones and the host joined ones) when the new port joins the
> bridge, similar to what vxlan_fdb_replay does (in that case, its FDB can
> be populated and only then attached to a bridge that you offload).
> However there are 2 possibilities: the addresses can be 'pushed' by the
> bridge into the port, or the port can 'pull' them from the bridge.
>
> Considering that in the general case, the new port can be really late to
> the party, and there may have been many other switchdev ports that
> already received the initial notification, we would like to avoid
> delivering duplicate events to them, since they might misbehave. And
> currently, the bridge calls the entire switchdev notifier chain, whereas
> for replaying it should just call the notifier block of the new guy.
> But the bridge doesn't know what is the new guy's notifier block, it
> just knows where the switchdev notifier chain is. So for simplification,
> we make this a driver-initiated pull for now, and the notifier block is
> passed as an argument.
>
> To emulate the calling context for mdb objects (deferred and put on the
> blocking notifier chain), we must iterate under RCU protection through
> the bridge's mdb entries, queue them, and only call them once we're out
> of the RCU read-side critical section.
>
> Suggested-by: Ido Schimmel <idosch@idosch.org>
> Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com>
> ---
> Changes in v3:
> - Removed the implication that avahi is crap from the commit message.
> - Made the br_mdb_replay shim return -EOPNOTSUPP.
>
>  include/linux/if_bridge.h |  9 +++++
>  net/bridge/br_mdb.c       | 84 +++++++++++++++++++++++++++++++++++++++
>  net/dsa/dsa_priv.h        |  2 +
>  net/dsa/port.c            |  6 +++
>  net/dsa/slave.c           |  2 +-
>  5 files changed, 102 insertions(+), 1 deletion(-)
>
[...]
>
> +int br_mdb_replay(struct net_device *br_dev, struct net_device *dev,
> +		  struct notifier_block *nb, struct netlink_ext_ack *extack)
> +{
> +	struct net_bridge_mdb_entry *mp;
> +	struct list_head mdb_list;

If you use LIST_HEAD(mdb_list)...

> +	struct net_bridge *br;
> +	int err = 0;
> +
> +	ASSERT_RTNL();
> +
> +	INIT_LIST_HEAD(&mdb_list);

... you can drop this one.

> +
> +	if (!netif_is_bridge_master(br_dev) || !netif_is_bridge_port(dev))
> +		return -EINVAL;
> +
> +	br = netdev_priv(br_dev);
> +
> +	if (!br_opt_get(br, BROPT_MULTICAST_ENABLED))
> +		return 0;
> +
> +	hlist_for_each_entry(mp, &br->mdb_list, mdb_node) {

You cannot walk over these lists without the multicast lock or RCU. RTNL is not
enough because of various timers and leave messages that can alter both the mdb_list
and the port group lists. I'd prefer RCU to avoid blocking the bridge mcast.
> +		struct net_bridge_port_group __rcu **pp;
> +		struct net_bridge_port_group *p;
> +
> +		if (mp->host_joined) {
> +			err = br_mdb_replay_one(nb, dev, mp,
> +						SWITCHDEV_OBJ_ID_HOST_MDB,
> +						br_dev, extack);
> +			if (err)
> +				return err;
> +		}
> +
> +		for (pp = &mp->ports; (p = rtnl_dereference(*pp)) != NULL;
> +		     pp = &p->next) {
> +			if (p->key.port->dev != dev)
> +				continue;
> +
> +			err = br_mdb_replay_one(nb, dev, mp,
> +						SWITCHDEV_OBJ_ID_PORT_MDB,
> +						dev, extack);
> +			if (err)
> +				return err;
> +		}
> +	}
> +
> +	return 0;
> +}
> +EXPORT_SYMBOL(br_mdb_replay);

EXPORT_SYMBOL_GPL

[...]
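For reference, the LIST_HEAD() remark above refers to the kernel macro that declares and initializes a list head in a single statement, which makes the separate INIT_LIST_HEAD() call redundant. A minimal illustration (not part of the patch; the function name is made up):

```c
#include <linux/list.h>

/* Illustration only: LIST_HEAD() defines and initializes the list head,
 * so the two-step "struct list_head mdb_list; ... INIT_LIST_HEAD(&mdb_list);"
 * form in the patch collapses into one line.
 */
static void example_replay_setup(void)
{
	LIST_HEAD(mdb_list);	/* declared and initialized in one statement */

	/* ... replay entries would be queued on mdb_list and later freed ... */
}
```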
On Mon, Mar 22, 2021 at 06:35:10PM +0200, Nikolay Aleksandrov wrote:
> > +	hlist_for_each_entry(mp, &br->mdb_list, mdb_node) {
>
> You cannot walk over these lists without the multicast lock or RCU. RTNL is not
> enough because of various timers and leave messages that can alter both the mdb_list
> and the port group lists. I'd prefer RCU to avoid blocking the bridge mcast.

The trouble is that I need to emulate the calling context that is
provided to SWITCHDEV_OBJ_ID_HOST_MDB and SWITCHDEV_OBJ_ID_PORT_MDB, and
that means blocking context.

So if I hold rcu_read_lock(), I need to queue up the mdb entries, and
notify the driver only after I leave the RCU critical section. The
memory footprint may temporarily blow up.

In fact this is what I did in v1:
https://patchwork.kernel.org/project/netdevbpf/patch/20210224114350.2791260-15-olteanv@gmail.com/

I just figured I could get away with rtnl_mutex protection, but it looks
like I can't. So I guess you prefer my v1?
On 22/03/2021 18:56, Vladimir Oltean wrote:
> On Mon, Mar 22, 2021 at 06:35:10PM +0200, Nikolay Aleksandrov wrote:
>>> +	hlist_for_each_entry(mp, &br->mdb_list, mdb_node) {
>>
>> You cannot walk over these lists without the multicast lock or RCU. RTNL is not
>> enough because of various timers and leave messages that can alter both the mdb_list
>> and the port group lists. I'd prefer RCU to avoid blocking the bridge mcast.
>
> The trouble is that I need to emulate the calling context that is
> provided to SWITCHDEV_OBJ_ID_HOST_MDB and SWITCHDEV_OBJ_ID_PORT_MDB, and
> that means blocking context.
>
> So if I hold rcu_read_lock(), I need to queue up the mdb entries, and
> notify the driver only after I leave the RCU critical section. The
> memory footprint may temporarily blow up.
>
> In fact this is what I did in v1:
> https://patchwork.kernel.org/project/netdevbpf/patch/20210224114350.2791260-15-olteanv@gmail.com/
>
> I just figured I could get away with rtnl_mutex protection, but it looks
> like I can't. So I guess you prefer my v1?
>

Indeed, if you need a blocking context then you'd have to go with v1.
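To make the agreed-upon direction concrete, the "collect under RCU, notify in blocking context" pattern being discussed looks roughly like the sketch below. This is an illustration of the idea only, not the actual v1 code: the mdb_replay_entry structure, the copy step and the notify_one() helper are invented here.

```c
#include <linux/list.h>
#include <linux/rculist.h>
#include <linux/slab.h>
#include <net/switchdev.h>
#include "br_private.h"		/* struct net_bridge, net_bridge_mdb_entry */

/* Invented for this sketch: one queued switchdev object per MDB entry. */
struct mdb_replay_entry {
	struct list_head list;
	struct switchdev_obj_port_mdb mdb;
};

static int mdb_collect_then_replay(struct net_bridge *br,
				   struct net_device *dev,
				   struct notifier_block *nb,
				   struct netlink_ext_ack *extack)
{
	struct mdb_replay_entry *e, *tmp;
	struct net_bridge_mdb_entry *mp;
	LIST_HEAD(replay_list);
	int err = 0;

	/* Walk the MDB under RCU; no notifier calls here, only copies. */
	rcu_read_lock();
	hlist_for_each_entry_rcu(mp, &br->mdb_list, mdb_node) {
		e = kzalloc(sizeof(*e), GFP_ATOMIC);
		if (!e) {
			err = -ENOMEM;
			break;
		}
		/* ... copy the address and vid from mp into e->mdb ... */
		list_add_tail(&e->list, &replay_list);
	}
	rcu_read_unlock();

	/* Back in blocking context: emulate the deferred
	 * SWITCHDEV_PORT_OBJ_ADD delivery towards the joining port's
	 * notifier block only, then free the temporary list.
	 */
	list_for_each_entry_safe(e, tmp, &replay_list, list) {
		if (!err)
			err = notify_one(nb, dev, &e->mdb, extack); /* hypothetical */
		list_del(&e->list);
		kfree(e);
	}

	return err;
}
```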
diff --git a/include/linux/if_bridge.h b/include/linux/if_bridge.h
index ebd16495459c..f6472969bb44 100644
--- a/include/linux/if_bridge.h
+++ b/include/linux/if_bridge.h
@@ -69,6 +69,8 @@ bool br_multicast_has_querier_anywhere(struct net_device *dev, int proto);
 bool br_multicast_has_querier_adjacent(struct net_device *dev, int proto);
 bool br_multicast_enabled(const struct net_device *dev);
 bool br_multicast_router(const struct net_device *dev);
+int br_mdb_replay(struct net_device *br_dev, struct net_device *dev,
+		  struct notifier_block *nb, struct netlink_ext_ack *extack);
 #else
 static inline int br_multicast_list_adjacent(struct net_device *dev,
 					     struct list_head *br_ip_list)
@@ -93,6 +95,13 @@ static inline bool br_multicast_router(const struct net_device *dev)
 {
 	return false;
 }
+static inline int br_mdb_replay(struct net_device *br_dev,
+				struct net_device *dev,
+				struct notifier_block *nb,
+				struct netlink_ext_ack *extack)
+{
+	return -EOPNOTSUPP;
+}
 #endif
 
 #if IS_ENABLED(CONFIG_BRIDGE) && IS_ENABLED(CONFIG_BRIDGE_VLAN_FILTERING)
diff --git a/net/bridge/br_mdb.c b/net/bridge/br_mdb.c
index 8846c5bcd075..23973186094c 100644
--- a/net/bridge/br_mdb.c
+++ b/net/bridge/br_mdb.c
@@ -506,6 +506,90 @@ static void br_mdb_complete(struct net_device *dev, int err, void *priv)
 	kfree(priv);
 }
 
+static int br_mdb_replay_one(struct notifier_block *nb, struct net_device *dev,
+			     struct net_bridge_mdb_entry *mp, int obj_id,
+			     struct net_device *orig_dev,
+			     struct netlink_ext_ack *extack)
+{
+	struct switchdev_notifier_port_obj_info obj_info = {
+		.info = {
+			.dev = dev,
+			.extack = extack,
+		},
+	};
+	struct switchdev_obj_port_mdb mdb = {
+		.obj = {
+			.orig_dev = orig_dev,
+			.id = obj_id,
+		},
+		.vid = mp->addr.vid,
+	};
+	int err;
+
+	if (mp->addr.proto == htons(ETH_P_IP))
+		ip_eth_mc_map(mp->addr.dst.ip4, mdb.addr);
+#if IS_ENABLED(CONFIG_IPV6)
+	else if (mp->addr.proto == htons(ETH_P_IPV6))
+		ipv6_eth_mc_map(&mp->addr.dst.ip6, mdb.addr);
+#endif
+	else
+		ether_addr_copy(mdb.addr, mp->addr.dst.mac_addr);
+
+	obj_info.obj = &mdb.obj;
+
+	err = nb->notifier_call(nb, SWITCHDEV_PORT_OBJ_ADD, &obj_info);
+	return notifier_to_errno(err);
+}
+
+int br_mdb_replay(struct net_device *br_dev, struct net_device *dev,
+		  struct notifier_block *nb, struct netlink_ext_ack *extack)
+{
+	struct net_bridge_mdb_entry *mp;
+	struct list_head mdb_list;
+	struct net_bridge *br;
+	int err = 0;
+
+	ASSERT_RTNL();
+
+	INIT_LIST_HEAD(&mdb_list);
+
+	if (!netif_is_bridge_master(br_dev) || !netif_is_bridge_port(dev))
+		return -EINVAL;
+
+	br = netdev_priv(br_dev);
+
+	if (!br_opt_get(br, BROPT_MULTICAST_ENABLED))
+		return 0;
+
+	hlist_for_each_entry(mp, &br->mdb_list, mdb_node) {
+		struct net_bridge_port_group __rcu **pp;
+		struct net_bridge_port_group *p;
+
+		if (mp->host_joined) {
+			err = br_mdb_replay_one(nb, dev, mp,
+						SWITCHDEV_OBJ_ID_HOST_MDB,
+						br_dev, extack);
+			if (err)
+				return err;
+		}
+
+		for (pp = &mp->ports; (p = rtnl_dereference(*pp)) != NULL;
+		     pp = &p->next) {
+			if (p->key.port->dev != dev)
+				continue;
+
+			err = br_mdb_replay_one(nb, dev, mp,
+						SWITCHDEV_OBJ_ID_PORT_MDB,
+						dev, extack);
+			if (err)
+				return err;
+		}
+	}
+
+	return 0;
+}
+EXPORT_SYMBOL(br_mdb_replay);
+
 static void br_mdb_switchdev_host_port(struct net_device *dev,
 				       struct net_device *lower_dev,
 				       struct net_bridge_mdb_entry *mp,
diff --git a/net/dsa/dsa_priv.h b/net/dsa/dsa_priv.h
index b8778c5d8529..b14c43cb88bb 100644
--- a/net/dsa/dsa_priv.h
+++ b/net/dsa/dsa_priv.h
@@ -262,6 +262,8 @@ static inline bool dsa_tree_offloads_bridge_port(struct dsa_switch_tree *dst,
 
 /* slave.c */
 extern const struct dsa_device_ops notag_netdev_ops;
+extern struct notifier_block dsa_slave_switchdev_blocking_notifier;
+
 void dsa_slave_mii_bus_init(struct dsa_switch *ds);
 int dsa_slave_create(struct dsa_port *dp);
 void dsa_slave_destroy(struct net_device *slave_dev);
diff --git a/net/dsa/port.c b/net/dsa/port.c
index 95e6f2861290..3e61e9e6675c 100644
--- a/net/dsa/port.c
+++ b/net/dsa/port.c
@@ -199,6 +199,12 @@ static int dsa_port_switchdev_sync(struct dsa_port *dp,
 	if (err && err != -EOPNOTSUPP)
 		return err;
 
+	err = br_mdb_replay(br, brport_dev,
+			    &dsa_slave_switchdev_blocking_notifier,
+			    extack);
+	if (err && err != -EOPNOTSUPP)
+		return err;
+
 	return 0;
 }
 
diff --git a/net/dsa/slave.c b/net/dsa/slave.c
index 1ff48be476bb..b974d8f84a2e 100644
--- a/net/dsa/slave.c
+++ b/net/dsa/slave.c
@@ -2396,7 +2396,7 @@ static struct notifier_block dsa_slave_switchdev_notifier = {
 	.notifier_call = dsa_slave_switchdev_event,
 };
 
-static struct notifier_block dsa_slave_switchdev_blocking_notifier = {
+struct notifier_block dsa_slave_switchdev_blocking_notifier = {
 	.notifier_call = dsa_slave_switchdev_blocking_event,
 };
 
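The DSA hunks above also show the intended calling convention for other switchdev drivers: when a port starts offloading a bridge, pull the current MDB state under rtnl through the driver's own blocking notifier block, and tolerate -EOPNOTSUPP. A hedged sketch, with the my_drv_* names made up for illustration:

```c
#include <linux/if_bridge.h>
#include <linux/netdevice.h>
#include <linux/notifier.h>

/* Hypothetical driver notifier block, for illustration only. */
extern struct notifier_block my_drv_switchdev_blocking_nb;

static int my_drv_port_bridge_join(struct net_device *brport_dev,
				   struct net_device *br_dev,
				   struct netlink_ext_ack *extack)
{
	int err;

	/* Called under rtnl_lock(), mirroring dsa_port_switchdev_sync(). */
	err = br_mdb_replay(br_dev, brport_dev,
			    &my_drv_switchdev_blocking_nb, extack);
	if (err && err != -EOPNOTSUPP)
		return err;

	return 0;
}
```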