From patchwork Mon Jul 19 17:06:23 2021
X-Patchwork-Submitter: Nikolay Aleksandrov
X-Patchwork-Id: 12386387
X-Patchwork-Delegate: kuba@kernel.org
From: Nikolay Aleksandrov
To: netdev@vger.kernel.org
Cc: roopa@nvidia.com, bridge@lists.linux-foundation.org, Nikolay Aleksandrov
Subject: [PATCH net-next 01/15] net: bridge: multicast: factor out port multicast context
Date: Mon, 19 Jul 2021 20:06:23 +0300
Message-Id: <20210719170637.435541-2-razor@blackwall.org>
X-Mailer: git-send-email 2.31.1
In-Reply-To: <20210719170637.435541-1-razor@blackwall.org>
References: <20210719170637.435541-1-razor@blackwall.org>

From: Nikolay Aleksandrov

Factor out the port's multicast context into a separate structure which
will later be shared for per-port,vlan contexts. No functional changes
intended. We need the structure even when bridge multicast is not
compiled in, so it can be passed down as a pointer to the forwarding
functions.

Signed-off-by: Nikolay Aleksandrov
---
 net/bridge/br_mdb.c       |  10 +-
 net/bridge/br_multicast.c | 186 ++++++++++++++++++++++----------------
 net/bridge/br_netlink.c   |   2 +-
 net/bridge/br_private.h   |  45 ++++++---
 net/bridge/br_sysfs_if.c  |   2 +-
 5 files changed, 146 insertions(+), 99 deletions(-)

diff --git a/net/bridge/br_mdb.c b/net/bridge/br_mdb.c index 17a720b4473f..64619dc65bc8 100644 --- a/net/bridge/br_mdb.c +++ b/net/bridge/br_mdb.c @@ -29,16 +29,16 @@ static bool br_rports_have_mc_router(struct net_bridge *br) static bool br_ip4_rports_get_timer(struct net_bridge_port *port, unsigned long *timer) { - *timer = br_timer_value(&port->ip4_mc_router_timer); - return !hlist_unhashed(&port->ip4_rlist); + *timer = br_timer_value(&port->multicast_ctx.ip4_mc_router_timer); + return !hlist_unhashed(&port->multicast_ctx.ip4_rlist); } static bool br_ip6_rports_get_timer(struct net_bridge_port *port, unsigned long *timer) { #if IS_ENABLED(CONFIG_IPV6) - *timer = br_timer_value(&port->ip6_mc_router_timer); - return !hlist_unhashed(&port->ip6_rlist); + *timer = br_timer_value(&port->multicast_ctx.ip6_mc_router_timer); + return !hlist_unhashed(&port->multicast_ctx.ip6_rlist); #else *timer = 0; return false; @@ -79,7 +79,7 @@ static int br_rports_fill_info(struct sk_buff *skb, struct netlink_callback *cb, nla_put_u32(skb, MDBA_ROUTER_PATTR_TIMER, max(ip4_timer, ip6_timer)) || nla_put_u8(skb, MDBA_ROUTER_PATTR_TYPE, - p->multicast_router) || + p->multicast_ctx.multicast_router) || (have_ip4_mc_rtr && nla_put_u32(skb, MDBA_ROUTER_PATTR_INET_TIMER, ip4_timer)) || diff --git a/net/bridge/br_multicast.c b/net/bridge/br_multicast.c index d0434dc8c03b..3abb673ee4ee 100644 --- a/net/bridge/br_multicast.c +++ b/net/bridge/br_multicast.c @@ -1368,50 +1368,52 @@ static bool br_multicast_rport_del(struct hlist_node *rlist) static bool br_ip4_multicast_rport_del(struct net_bridge_port *p) { - return br_multicast_rport_del(&p->ip4_rlist); + return br_multicast_rport_del(&p->multicast_ctx.ip4_rlist); } static bool br_ip6_multicast_rport_del(struct net_bridge_port *p) { #if IS_ENABLED(CONFIG_IPV6) - return br_multicast_rport_del(&p->ip6_rlist); + return br_multicast_rport_del(&p->multicast_ctx.ip6_rlist); #else return false; #endif } -static void br_multicast_router_expired(struct net_bridge_port *port, +static void br_multicast_router_expired(struct net_bridge_mcast_port *pmctx, struct timer_list *t, struct hlist_node *rlist) { - struct net_bridge *br = port->br; + struct 
net_bridge *br = pmctx->port->br; bool del; spin_lock(&br->multicast_lock); - if (port->multicast_router == MDB_RTR_TYPE_DISABLED || - port->multicast_router == MDB_RTR_TYPE_PERM || + if (pmctx->multicast_router == MDB_RTR_TYPE_DISABLED || + pmctx->multicast_router == MDB_RTR_TYPE_PERM || timer_pending(t)) goto out; del = br_multicast_rport_del(rlist); - br_multicast_rport_del_notify(port, del); + br_multicast_rport_del_notify(pmctx->port, del); out: spin_unlock(&br->multicast_lock); } static void br_ip4_multicast_router_expired(struct timer_list *t) { - struct net_bridge_port *port = from_timer(port, t, ip4_mc_router_timer); + struct net_bridge_mcast_port *pmctx = from_timer(pmctx, t, + ip4_mc_router_timer); - br_multicast_router_expired(port, t, &port->ip4_rlist); + br_multicast_router_expired(pmctx, t, &pmctx->ip4_rlist); } #if IS_ENABLED(CONFIG_IPV6) static void br_ip6_multicast_router_expired(struct timer_list *t) { - struct net_bridge_port *port = from_timer(port, t, ip6_mc_router_timer); + struct net_bridge_mcast_port *pmctx = from_timer(pmctx, t, + ip6_mc_router_timer); - br_multicast_router_expired(port, t, &port->ip6_rlist); + br_multicast_router_expired(pmctx, t, &pmctx->ip6_rlist); } #endif @@ -1555,7 +1557,7 @@ static void br_multicast_send_query(struct net_bridge *br, memset(&br_group.dst, 0, sizeof(br_group.dst)); - if (port ? (own_query == &port->ip4_own_query) : + if (port ? (own_query == &port->multicast_ctx.ip4_own_query) : (own_query == &br->ip4_own_query)) { other_query = &br->ip4_other_query; br_group.proto = htons(ETH_P_IP); @@ -1580,20 +1582,20 @@ static void br_multicast_send_query(struct net_bridge *br, } static void -br_multicast_port_query_expired(struct net_bridge_port *port, +br_multicast_port_query_expired(struct net_bridge_mcast_port *pmctx, struct bridge_mcast_own_query *query) { - struct net_bridge *br = port->br; + struct net_bridge *br = pmctx->port->br; spin_lock(&br->multicast_lock); - if (port->state == BR_STATE_DISABLED || - port->state == BR_STATE_BLOCKING) + if (pmctx->port->state == BR_STATE_DISABLED || + pmctx->port->state == BR_STATE_BLOCKING) goto out; if (query->startup_sent < br->multicast_startup_query_count) query->startup_sent++; - br_multicast_send_query(port->br, port, query); + br_multicast_send_query(pmctx->port->br, pmctx->port, query); out: spin_unlock(&br->multicast_lock); @@ -1601,17 +1603,19 @@ br_multicast_port_query_expired(struct net_bridge_port *port, static void br_ip4_multicast_port_query_expired(struct timer_list *t) { - struct net_bridge_port *port = from_timer(port, t, ip4_own_query.timer); + struct net_bridge_mcast_port *pmctx = from_timer(pmctx, t, + ip4_own_query.timer); - br_multicast_port_query_expired(port, &port->ip4_own_query); + br_multicast_port_query_expired(pmctx, &pmctx->ip4_own_query); } #if IS_ENABLED(CONFIG_IPV6) static void br_ip6_multicast_port_query_expired(struct timer_list *t) { - struct net_bridge_port *port = from_timer(port, t, ip6_own_query.timer); + struct net_bridge_mcast_port *pmctx = from_timer(pmctx, t, + ip6_own_query.timer); - br_multicast_port_query_expired(port, &port->ip6_own_query); + br_multicast_port_query_expired(pmctx, &pmctx->ip6_own_query); } #endif @@ -1666,23 +1670,38 @@ static int br_mc_disabled_update(struct net_device *dev, bool value, return switchdev_port_attr_set(dev, &attr, extack); } -int br_multicast_add_port(struct net_bridge_port *port) +static void br_multicast_port_ctx_init(struct net_bridge_port *port, + struct net_bridge_mcast_port *pmctx) { - int err; - - 
port->multicast_router = MDB_RTR_TYPE_TEMP_QUERY; - port->multicast_eht_hosts_limit = BR_MCAST_DEFAULT_EHT_HOSTS_LIMIT; - - timer_setup(&port->ip4_mc_router_timer, + pmctx->port = port; + pmctx->multicast_router = MDB_RTR_TYPE_TEMP_QUERY; + timer_setup(&pmctx->ip4_mc_router_timer, br_ip4_multicast_router_expired, 0); - timer_setup(&port->ip4_own_query.timer, + timer_setup(&pmctx->ip4_own_query.timer, br_ip4_multicast_port_query_expired, 0); #if IS_ENABLED(CONFIG_IPV6) - timer_setup(&port->ip6_mc_router_timer, + timer_setup(&pmctx->ip6_mc_router_timer, br_ip6_multicast_router_expired, 0); - timer_setup(&port->ip6_own_query.timer, + timer_setup(&pmctx->ip6_own_query.timer, br_ip6_multicast_port_query_expired, 0); #endif +} + +static void br_multicast_port_ctx_deinit(struct net_bridge_mcast_port *pmctx) +{ +#if IS_ENABLED(CONFIG_IPV6) + del_timer_sync(&pmctx->ip6_mc_router_timer); +#endif + del_timer_sync(&pmctx->ip4_mc_router_timer); +} + +int br_multicast_add_port(struct net_bridge_port *port) +{ + int err; + + port->multicast_eht_hosts_limit = BR_MCAST_DEFAULT_EHT_HOSTS_LIMIT; + br_multicast_port_ctx_init(port, &port->multicast_ctx); + err = br_mc_disabled_update(port->dev, br_opt_get(port->br, BROPT_MULTICAST_ENABLED), @@ -1711,10 +1730,7 @@ void br_multicast_del_port(struct net_bridge_port *port) hlist_move_list(&br->mcast_gc_list, &deleted_head); spin_unlock_bh(&br->multicast_lock); br_multicast_gc(&deleted_head); - del_timer_sync(&port->ip4_mc_router_timer); -#if IS_ENABLED(CONFIG_IPV6) - del_timer_sync(&port->ip6_mc_router_timer); -#endif + br_multicast_port_ctx_deinit(&port->multicast_ctx); free_percpu(port->mcast_stats); } @@ -1734,11 +1750,11 @@ static void __br_multicast_enable_port(struct net_bridge_port *port) if (!br_opt_get(br, BROPT_MULTICAST_ENABLED) || !netif_running(br->dev)) return; - br_multicast_enable(&port->ip4_own_query); + br_multicast_enable(&port->multicast_ctx.ip4_own_query); #if IS_ENABLED(CONFIG_IPV6) - br_multicast_enable(&port->ip6_own_query); + br_multicast_enable(&port->multicast_ctx.ip6_own_query); #endif - if (port->multicast_router == MDB_RTR_TYPE_PERM) { + if (port->multicast_ctx.multicast_router == MDB_RTR_TYPE_PERM) { br_ip4_multicast_add_router(br, port); br_ip6_multicast_add_router(br, port); } @@ -1766,12 +1782,12 @@ void br_multicast_disable_port(struct net_bridge_port *port) br_multicast_find_del_pg(br, pg); del |= br_ip4_multicast_rport_del(port); - del_timer(&port->ip4_mc_router_timer); - del_timer(&port->ip4_own_query.timer); + del_timer(&port->multicast_ctx.ip4_mc_router_timer); + del_timer(&port->multicast_ctx.ip4_own_query.timer); del |= br_ip6_multicast_rport_del(port); #if IS_ENABLED(CONFIG_IPV6) - del_timer(&port->ip6_mc_router_timer); - del_timer(&port->ip6_own_query.timer); + del_timer(&port->multicast_ctx.ip6_mc_router_timer); + del_timer(&port->multicast_ctx.ip6_own_query.timer); #endif br_multicast_rport_del_notify(port, del); spin_unlock(&br->multicast_lock); @@ -2713,11 +2729,18 @@ br_multicast_rport_from_node(struct net_bridge *br, struct hlist_head *mc_router_list, struct hlist_node *rlist) { + struct net_bridge_mcast_port *pmctx; + #if IS_ENABLED(CONFIG_IPV6) if (mc_router_list == &br->ip6_mc_router_list) - return hlist_entry(rlist, struct net_bridge_port, ip6_rlist); + pmctx = hlist_entry(rlist, struct net_bridge_mcast_port, + ip6_rlist); + else #endif - return hlist_entry(rlist, struct net_bridge_port, ip4_rlist); + pmctx = hlist_entry(rlist, struct net_bridge_mcast_port, + ip4_rlist); + + return pmctx->port; } static struct 
hlist_node * @@ -2746,10 +2769,10 @@ static bool br_multicast_no_router_otherpf(struct net_bridge_port *port, struct hlist_node *rnode) { #if IS_ENABLED(CONFIG_IPV6) - if (rnode != &port->ip6_rlist) - return hlist_unhashed(&port->ip6_rlist); + if (rnode != &port->multicast_ctx.ip6_rlist) + return hlist_unhashed(&port->multicast_ctx.ip6_rlist); else - return hlist_unhashed(&port->ip4_rlist); + return hlist_unhashed(&port->multicast_ctx.ip4_rlist); #else return true; #endif @@ -2793,7 +2816,7 @@ static void br_multicast_add_router(struct net_bridge *br, static void br_ip4_multicast_add_router(struct net_bridge *br, struct net_bridge_port *port) { - br_multicast_add_router(br, port, &port->ip4_rlist, + br_multicast_add_router(br, port, &port->multicast_ctx.ip4_rlist, &br->ip4_mc_router_list); } @@ -2805,7 +2828,7 @@ static void br_ip6_multicast_add_router(struct net_bridge *br, struct net_bridge_port *port) { #if IS_ENABLED(CONFIG_IPV6) - br_multicast_add_router(br, port, &port->ip6_rlist, + br_multicast_add_router(br, port, &port->multicast_ctx.ip6_rlist, &br->ip6_mc_router_list); #endif } @@ -2828,8 +2851,8 @@ static void br_multicast_mark_router(struct net_bridge *br, return; } - if (port->multicast_router == MDB_RTR_TYPE_DISABLED || - port->multicast_router == MDB_RTR_TYPE_PERM) + if (port->multicast_ctx.multicast_router == MDB_RTR_TYPE_DISABLED || + port->multicast_ctx.multicast_router == MDB_RTR_TYPE_PERM) return; br_multicast_add_router(br, port, rlist, mc_router_list); @@ -2843,8 +2866,8 @@ static void br_ip4_multicast_mark_router(struct net_bridge *br, struct hlist_node *rlist = NULL; if (port) { - timer = &port->ip4_mc_router_timer; - rlist = &port->ip4_rlist; + timer = &port->multicast_ctx.ip4_mc_router_timer; + rlist = &port->multicast_ctx.ip4_rlist; } br_multicast_mark_router(br, port, timer, rlist, @@ -2859,8 +2882,8 @@ static void br_ip6_multicast_mark_router(struct net_bridge *br, struct hlist_node *rlist = NULL; if (port) { - timer = &port->ip6_mc_router_timer; - rlist = &port->ip6_rlist; + timer = &port->multicast_ctx.ip6_mc_router_timer; + rlist = &port->multicast_ctx.ip6_rlist; } br_multicast_mark_router(br, port, timer, rlist, @@ -3183,7 +3206,8 @@ static void br_ip4_multicast_leave_group(struct net_bridge *br, if (ipv4_is_local_multicast(group)) return; - own_query = port ? &port->ip4_own_query : &br->ip4_own_query; + own_query = port ? &port->multicast_ctx.ip4_own_query : + &br->ip4_own_query; memset(&br_group, 0, sizeof(br_group)); br_group.dst.ip4 = group; @@ -3207,7 +3231,8 @@ static void br_ip6_multicast_leave_group(struct net_bridge *br, if (ipv6_addr_is_ll_all_nodes(group)) return; - own_query = port ? &port->ip6_own_query : &br->ip6_own_query; + own_query = port ? &port->multicast_ctx.ip6_own_query : + &br->ip6_own_query; memset(&br_group, 0, sizeof(br_group)); br_group.dst.ip6 = *group; @@ -3668,10 +3693,10 @@ br_multicast_rport_del_notify(struct net_bridge_port *p, bool deleted) /* For backwards compatibility for now, only notify if there is * no multicast router anymore for both IPv4 and IPv6. 
*/ - if (!hlist_unhashed(&p->ip4_rlist)) + if (!hlist_unhashed(&p->multicast_ctx.ip4_rlist)) return; #if IS_ENABLED(CONFIG_IPV6) - if (!hlist_unhashed(&p->ip6_rlist)) + if (!hlist_unhashed(&p->multicast_ctx.ip6_rlist)) return; #endif @@ -3679,8 +3704,8 @@ br_multicast_rport_del_notify(struct net_bridge_port *p, bool deleted) br_port_mc_router_state_change(p, false); /* don't allow timer refresh */ - if (p->multicast_router == MDB_RTR_TYPE_TEMP) - p->multicast_router = MDB_RTR_TYPE_TEMP_QUERY; + if (p->multicast_ctx.multicast_router == MDB_RTR_TYPE_TEMP) + p->multicast_ctx.multicast_router = MDB_RTR_TYPE_TEMP_QUERY; } int br_multicast_set_port_router(struct net_bridge_port *p, unsigned long val) @@ -3691,13 +3716,13 @@ int br_multicast_set_port_router(struct net_bridge_port *p, unsigned long val) bool del = false; spin_lock(&br->multicast_lock); - if (p->multicast_router == val) { + if (p->multicast_ctx.multicast_router == val) { /* Refresh the temp router port timer */ - if (p->multicast_router == MDB_RTR_TYPE_TEMP) { - mod_timer(&p->ip4_mc_router_timer, + if (p->multicast_ctx.multicast_router == MDB_RTR_TYPE_TEMP) { + mod_timer(&p->multicast_ctx.ip4_mc_router_timer, now + br->multicast_querier_interval); #if IS_ENABLED(CONFIG_IPV6) - mod_timer(&p->ip6_mc_router_timer, + mod_timer(&p->multicast_ctx.ip6_mc_router_timer, now + br->multicast_querier_interval); #endif } @@ -3706,32 +3731,32 @@ int br_multicast_set_port_router(struct net_bridge_port *p, unsigned long val) } switch (val) { case MDB_RTR_TYPE_DISABLED: - p->multicast_router = MDB_RTR_TYPE_DISABLED; + p->multicast_ctx.multicast_router = MDB_RTR_TYPE_DISABLED; del |= br_ip4_multicast_rport_del(p); - del_timer(&p->ip4_mc_router_timer); + del_timer(&p->multicast_ctx.ip4_mc_router_timer); del |= br_ip6_multicast_rport_del(p); #if IS_ENABLED(CONFIG_IPV6) - del_timer(&p->ip6_mc_router_timer); + del_timer(&p->multicast_ctx.ip6_mc_router_timer); #endif br_multicast_rport_del_notify(p, del); break; case MDB_RTR_TYPE_TEMP_QUERY: - p->multicast_router = MDB_RTR_TYPE_TEMP_QUERY; + p->multicast_ctx.multicast_router = MDB_RTR_TYPE_TEMP_QUERY; del |= br_ip4_multicast_rport_del(p); del |= br_ip6_multicast_rport_del(p); br_multicast_rport_del_notify(p, del); break; case MDB_RTR_TYPE_PERM: - p->multicast_router = MDB_RTR_TYPE_PERM; - del_timer(&p->ip4_mc_router_timer); + p->multicast_ctx.multicast_router = MDB_RTR_TYPE_PERM; + del_timer(&p->multicast_ctx.ip4_mc_router_timer); br_ip4_multicast_add_router(br, p); #if IS_ENABLED(CONFIG_IPV6) - del_timer(&p->ip6_mc_router_timer); + del_timer(&p->multicast_ctx.ip6_mc_router_timer); #endif br_ip6_multicast_add_router(br, p); break; case MDB_RTR_TYPE_TEMP: - p->multicast_router = MDB_RTR_TYPE_TEMP; + p->multicast_ctx.multicast_router = MDB_RTR_TYPE_TEMP; br_ip4_multicast_mark_router(br, p); br_ip6_multicast_mark_router(br, p); break; @@ -3759,10 +3784,10 @@ static void br_multicast_start_querier(struct net_bridge *br, continue; if (query == &br->ip4_own_query) - br_multicast_enable(&port->ip4_own_query); + br_multicast_enable(&port->multicast_ctx.ip4_own_query); #if IS_ENABLED(CONFIG_IPV6) else - br_multicast_enable(&port->ip6_own_query); + br_multicast_enable(&port->multicast_ctx.ip6_own_query); #endif } rcu_read_unlock(); @@ -4071,7 +4096,8 @@ EXPORT_SYMBOL_GPL(br_multicast_has_querier_adjacent); */ bool br_multicast_has_router_adjacent(struct net_device *dev, int proto) { - struct net_bridge_port *port, *p; + struct net_bridge_mcast_port *pmctx; + struct net_bridge_port *port; bool ret = false; 
rcu_read_lock(); @@ -4081,9 +4107,9 @@ bool br_multicast_has_router_adjacent(struct net_device *dev, int proto) switch (proto) { case ETH_P_IP: - hlist_for_each_entry_rcu(p, &port->br->ip4_mc_router_list, + hlist_for_each_entry_rcu(pmctx, &port->br->ip4_mc_router_list, ip4_rlist) { - if (p == port) + if (pmctx->port == port) continue; ret = true; @@ -4092,9 +4118,9 @@ bool br_multicast_has_router_adjacent(struct net_device *dev, int proto) break; #if IS_ENABLED(CONFIG_IPV6) case ETH_P_IPV6: - hlist_for_each_entry_rcu(p, &port->br->ip6_mc_router_list, + hlist_for_each_entry_rcu(pmctx, &port->br->ip6_mc_router_list, ip6_rlist) { - if (p == port) + if (pmctx->port == port) continue; ret = true; diff --git a/net/bridge/br_netlink.c b/net/bridge/br_netlink.c index d366420f40f1..0cbe5826cfe8 100644 --- a/net/bridge/br_netlink.c +++ b/net/bridge/br_netlink.c @@ -287,7 +287,7 @@ static int br_port_fill_attrs(struct sk_buff *skb, #ifdef CONFIG_BRIDGE_IGMP_SNOOPING if (nla_put_u8(skb, IFLA_BRPORT_MULTICAST_ROUTER, - p->multicast_router) || + p->multicast_ctx.multicast_router) || nla_put_u32(skb, IFLA_BRPORT_MCAST_EHT_HOSTS_LIMIT, p->multicast_eht_hosts_limit) || nla_put_u32(skb, IFLA_BRPORT_MCAST_EHT_HOSTS_CNT, diff --git a/net/bridge/br_private.h b/net/bridge/br_private.h index 97ced5e10d04..61034fd8a3bd 100644 --- a/net/bridge/br_private.h +++ b/net/bridge/br_private.h @@ -89,6 +89,23 @@ struct bridge_mcast_stats { }; #endif +/* net_bridge_mcast_port must be always defined due to forwarding stubs */ +struct net_bridge_mcast_port { +#ifdef CONFIG_BRIDGE_IGMP_SNOOPING + struct net_bridge_port *port; + + struct bridge_mcast_own_query ip4_own_query; + struct timer_list ip4_mc_router_timer; + struct hlist_node ip4_rlist; +#if IS_ENABLED(CONFIG_IPV6) + struct bridge_mcast_own_query ip6_own_query; + struct timer_list ip6_mc_router_timer; + struct hlist_node ip6_rlist; +#endif /* IS_ENABLED(CONFIG_IPV6) */ + unsigned char multicast_router; +#endif /* CONFIG_BRIDGE_IGMP_SNOOPING */ +}; + struct br_tunnel_info { __be64 tunnel_id; struct metadata_dst __rcu *tunnel_dst; @@ -313,19 +330,13 @@ struct net_bridge_port { struct kobject kobj; struct rcu_head rcu; + struct net_bridge_mcast_port multicast_ctx; + #ifdef CONFIG_BRIDGE_IGMP_SNOOPING - struct bridge_mcast_own_query ip4_own_query; - struct timer_list ip4_mc_router_timer; - struct hlist_node ip4_rlist; -#if IS_ENABLED(CONFIG_IPV6) - struct bridge_mcast_own_query ip6_own_query; - struct timer_list ip6_mc_router_timer; - struct hlist_node ip6_rlist; -#endif /* IS_ENABLED(CONFIG_IPV6) */ + struct bridge_mcast_stats __percpu *mcast_stats; + u32 multicast_eht_hosts_limit; u32 multicast_eht_hosts_cnt; - unsigned char multicast_router; - struct bridge_mcast_stats __percpu *mcast_stats; struct hlist_head mglist; #endif @@ -892,11 +903,21 @@ br_multicast_get_first_rport_node(struct net_bridge *b, struct sk_buff *skb) { static inline struct net_bridge_port * br_multicast_rport_from_node_skb(struct hlist_node *rp, struct sk_buff *skb) { + struct net_bridge_mcast_port *mctx; + #if IS_ENABLED(CONFIG_IPV6) if (skb->protocol == htons(ETH_P_IPV6)) - return hlist_entry_safe(rp, struct net_bridge_port, ip6_rlist); + mctx = hlist_entry_safe(rp, struct net_bridge_mcast_port, + ip6_rlist); + else #endif - return hlist_entry_safe(rp, struct net_bridge_port, ip4_rlist); + mctx = hlist_entry_safe(rp, struct net_bridge_mcast_port, + ip4_rlist); + + if (mctx) + return mctx->port; + else + return NULL; } static inline bool br_ip4_multicast_is_router(struct net_bridge *br) diff --git 
a/net/bridge/br_sysfs_if.c b/net/bridge/br_sysfs_if.c index 72e92376eef1..e9e3aedd3178 100644 --- a/net/bridge/br_sysfs_if.c +++ b/net/bridge/br_sysfs_if.c @@ -244,7 +244,7 @@ BRPORT_ATTR_FLAG(isolated, BR_ISOLATED); #ifdef CONFIG_BRIDGE_IGMP_SNOOPING static ssize_t show_multicast_router(struct net_bridge_port *p, char *buf) { - return sprintf(buf, "%d\n", p->multicast_router); + return sprintf(buf, "%d\n", p->multicast_ctx.multicast_router); } static int store_multicast_router(struct net_bridge_port *p,
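
[Editor's sketch] The conversion in patch 01/15 leans on a single pattern: the per-port timers now live inside struct net_bridge_mcast_port, so the timer callbacks use from_timer() (a container_of() wrapper) to recover the context and then reach the owning port through pmctx->port. Below is a minimal userspace sketch of that pattern, not kernel code; the struct layouts are trimmed stand-ins and the scaffolding (the container_of macro, main(), the "swp1" port name) is invented purely for illustration.

/* Userspace illustration of the container_of idiom behind from_timer().
 * Given a pointer to the embedded timer field, recover the enclosing
 * net_bridge_mcast_port and reach the owning port via pmctx->port,
 * which is what the reworked timer callbacks in this patch do.
 */
#include <stddef.h>
#include <stdio.h>

#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

struct timer_list { int pending; };		/* stand-in only */

struct net_bridge_port { const char *name; };	/* stand-in only */

struct net_bridge_mcast_port {
	struct net_bridge_port *port;		/* back-pointer to the port */
	struct timer_list ip4_mc_router_timer;	/* embedded, as in the patch */
};

/* Shaped like br_ip4_multicast_router_expired(): the callback only gets
 * the timer pointer and recovers the per-port multicast context from it.
 */
static void router_expired(struct timer_list *t)
{
	struct net_bridge_mcast_port *pmctx =
		container_of(t, struct net_bridge_mcast_port, ip4_mc_router_timer);

	printf("router timer expired on port %s\n", pmctx->port->name);
}

int main(void)
{
	struct net_bridge_port port = { .name = "swp1" };
	struct net_bridge_mcast_port pmctx = { .port = &port };

	router_expired(&pmctx.ip4_mc_router_timer);
	return 0;
}

Built with any C compiler, this prints the port name from inside the callback; the same indirection is why br_multicast_router_expired() can take a struct net_bridge_mcast_port * and still reach port state when it needs it.
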
From patchwork Mon Jul 19 17:06:24 2021
X-Patchwork-Submitter: Nikolay Aleksandrov
X-Patchwork-Id: 12386379
X-Patchwork-Delegate: kuba@kernel.org
From: Nikolay Aleksandrov
To: netdev@vger.kernel.org
Cc: roopa@nvidia.com, bridge@lists.linux-foundation.org, Nikolay Aleksandrov
Subject: [PATCH net-next 02/15] net: bridge: multicast: factor out bridge multicast context
Date: Mon, 19 Jul 2021 20:06:24 +0300
Message-Id: <20210719170637.435541-3-razor@blackwall.org>
X-Mailer: git-send-email 2.31.1
In-Reply-To: <20210719170637.435541-1-razor@blackwall.org>
References: <20210719170637.435541-1-razor@blackwall.org>

From: Nikolay Aleksandrov

Factor out the bridge's global multicast context into a separate
structure which will later be used for per-vlan global context. No
functional changes intended.

Signed-off-by: Nikolay Aleksandrov
---
 net/bridge/br_mdb.c       |  23 +--
 net/bridge/br_multicast.c | 398 +++++++++++++++++++++-----------------
 net/bridge/br_netlink.c   |  39 ++--
 net/bridge/br_private.h   | 112 ++++++-----
 net/bridge/br_sysfs_br.c  |  38 ++--
 5 files changed, 335 insertions(+), 275 deletions(-)

diff --git a/net/bridge/br_mdb.c b/net/bridge/br_mdb.c index 64619dc65bc8..effe03c08038 100644 --- a/net/bridge/br_mdb.c +++ b/net/bridge/br_mdb.c @@ -16,13 +16,13 @@ #include "br_private.h" -static bool br_rports_have_mc_router(struct net_bridge *br) +static bool br_rports_have_mc_router(struct net_bridge_mcast *brmctx) { #if IS_ENABLED(CONFIG_IPV6) - return !hlist_empty(&br->ip4_mc_router_list) || - !hlist_empty(&br->ip6_mc_router_list); + return !hlist_empty(&brmctx->ip4_mc_router_list) || + !hlist_empty(&brmctx->ip6_mc_router_list); #else - return !hlist_empty(&br->ip4_mc_router_list); + return !hlist_empty(&brmctx->ip4_mc_router_list); #endif } @@ -54,10 +54,10 @@ static int br_rports_fill_info(struct sk_buff *skb, struct netlink_callback *cb, struct nlattr *nest, *port_nest; struct net_bridge_port *p; - if (!br->multicast_router) + if (!br->multicast_ctx.multicast_router) return 0; - if (!br_rports_have_mc_router(br)) + if (!br_rports_have_mc_router(&br->multicast_ctx)) return 0; nest = nla_nest_start_noflag(skb, MDBA_ROUTER); @@ -240,7 +240,7 @@ static int __mdb_fill_info(struct sk_buff *skb, switch (mp->addr.proto) { case htons(ETH_P_IP): - dump_srcs_mode = !!(mp->br->multicast_igmp_version == 3); + dump_srcs_mode = !!(mp->br->multicast_ctx.multicast_igmp_version == 3); if (mp->addr.src.ip4) { if (nla_put_in_addr(skb, MDBA_MDB_EATTR_SOURCE, mp->addr.src.ip4)) @@ -250,7 +250,7 @@ static int __mdb_fill_info(struct sk_buff *skb, break; #if IS_ENABLED(CONFIG_IPV6) case htons(ETH_P_IPV6): - dump_srcs_mode = !!(mp->br->multicast_mld_version == 2); + dump_srcs_mode = !!(mp->br->multicast_ctx.multicast_mld_version == 2); if (!ipv6_addr_any(&mp->addr.src.ip6)) { if (nla_put_in6_addr(skb, MDBA_MDB_EATTR_SOURCE, &mp->addr.src.ip6)) @@ -483,7 +483,7 @@ static size_t rtnl_mdb_nlmsg_size(struct net_bridge_port_group *pg) /* MDBA_MDB_EATTR_SOURCE */ if (pg->key.addr.src.ip4) nlmsg_size += nla_total_size(sizeof(__be32)); - if (pg->key.port->br->multicast_igmp_version == 2) + if 
(pg->key.port->br->multicast_ctx.multicast_igmp_version == 2) goto out; addr_size = sizeof(__be32); break; @@ -492,7 +492,7 @@ static size_t rtnl_mdb_nlmsg_size(struct net_bridge_port_group *pg) /* MDBA_MDB_EATTR_SOURCE */ if (!ipv6_addr_any(&pg->key.addr.src.ip6)) nlmsg_size += nla_total_size(sizeof(struct in6_addr)); - if (pg->key.port->br->multicast_mld_version == 1) + if (pg->key.port->br->multicast_ctx.multicast_mld_version == 1) goto out; addr_size = sizeof(struct in6_addr); break; @@ -1084,7 +1084,8 @@ static int br_mdb_add_group(struct net_bridge *br, struct net_bridge_port *port, } rcu_assign_pointer(*pp, p); if (entry->state == MDB_TEMPORARY) - mod_timer(&p->timer, now + br->multicast_membership_interval); + mod_timer(&p->timer, + now + br->multicast_ctx.multicast_membership_interval); br_mdb_notify(br->dev, mp, p, RTM_NEWMDB); /* if we are adding a new EXCLUDE port group (*,G) it needs to be also * added to all S,G entries for proper replication, if we are adding diff --git a/net/bridge/br_multicast.c b/net/bridge/br_multicast.c index 3abb673ee4ee..92bfc1d95cd5 100644 --- a/net/bridge/br_multicast.c +++ b/net/bridge/br_multicast.c @@ -158,7 +158,7 @@ struct net_bridge_mdb_entry *br_mdb_get(struct net_bridge *br, switch (skb->protocol) { case htons(ETH_P_IP): ip.dst.ip4 = ip_hdr(skb)->daddr; - if (br->multicast_igmp_version == 3) { + if (br->multicast_ctx.multicast_igmp_version == 3) { struct net_bridge_mdb_entry *mdb; ip.src.ip4 = ip_hdr(skb)->saddr; @@ -171,7 +171,7 @@ struct net_bridge_mdb_entry *br_mdb_get(struct net_bridge *br, #if IS_ENABLED(CONFIG_IPV6) case htons(ETH_P_IPV6): ip.dst.ip6 = ipv6_hdr(skb)->daddr; - if (br->multicast_mld_version == 2) { + if (br->multicast_ctx.multicast_mld_version == 2) { struct net_bridge_mdb_entry *mdb; ip.src.ip6 = ipv6_hdr(skb)->saddr; @@ -699,6 +699,7 @@ static struct sk_buff *br_ip4_multicast_alloc_query(struct net_bridge *br, u8 sflag, u8 *igmp_type, bool *need_rexmit) { + struct net_bridge_mcast *brmctx = &br->multicast_ctx; struct net_bridge_port *p = pg ? pg->key.port : NULL; struct net_bridge_group_src *ent; size_t pkt_size, igmp_hdr_size; @@ -714,11 +715,11 @@ static struct sk_buff *br_ip4_multicast_alloc_query(struct net_bridge *br, u16 lmqt_srcs = 0; igmp_hdr_size = sizeof(*ih); - if (br->multicast_igmp_version == 3) { + if (brmctx->multicast_igmp_version == 3) { igmp_hdr_size = sizeof(*ihv3); if (pg && with_srcs) { - lmqt = now + (br->multicast_last_member_interval * - br->multicast_last_member_count); + lmqt = now + (brmctx->multicast_last_member_interval * + brmctx->multicast_last_member_count); hlist_for_each_entry(ent, &pg->src_list, node) { if (over_lmqt == time_after(ent->timer.expires, lmqt) && @@ -775,12 +776,12 @@ static struct sk_buff *br_ip4_multicast_alloc_query(struct net_bridge *br, skb_set_transport_header(skb, skb->len); *igmp_type = IGMP_HOST_MEMBERSHIP_QUERY; - switch (br->multicast_igmp_version) { + switch (brmctx->multicast_igmp_version) { case 2: ih = igmp_hdr(skb); ih->type = IGMP_HOST_MEMBERSHIP_QUERY; - ih->code = (group ? br->multicast_last_member_interval : - br->multicast_query_response_interval) / + ih->code = (group ? brmctx->multicast_last_member_interval : + brmctx->multicast_query_response_interval) / (HZ / IGMP_TIMER_SCALE); ih->group = group; ih->csum = 0; @@ -790,11 +791,11 @@ static struct sk_buff *br_ip4_multicast_alloc_query(struct net_bridge *br, case 3: ihv3 = igmpv3_query_hdr(skb); ihv3->type = IGMP_HOST_MEMBERSHIP_QUERY; - ihv3->code = (group ? 
br->multicast_last_member_interval : - br->multicast_query_response_interval) / + ihv3->code = (group ? brmctx->multicast_last_member_interval : + brmctx->multicast_query_response_interval) / (HZ / IGMP_TIMER_SCALE); ihv3->group = group; - ihv3->qqic = br->multicast_query_interval / HZ; + ihv3->qqic = brmctx->multicast_query_interval / HZ; ihv3->nsrcs = htons(lmqt_srcs); ihv3->resv = 0; ihv3->suppress = sflag; @@ -845,6 +846,7 @@ static struct sk_buff *br_ip6_multicast_alloc_query(struct net_bridge *br, u8 sflag, u8 *igmp_type, bool *need_rexmit) { + struct net_bridge_mcast *brmctx = &br->multicast_ctx; struct net_bridge_port *p = pg ? pg->key.port : NULL; struct net_bridge_group_src *ent; size_t pkt_size, mld_hdr_size; @@ -862,11 +864,11 @@ static struct sk_buff *br_ip6_multicast_alloc_query(struct net_bridge *br, u8 *hopopt; mld_hdr_size = sizeof(*mldq); - if (br->multicast_mld_version == 2) { + if (brmctx->multicast_mld_version == 2) { mld_hdr_size = sizeof(*mld2q); if (pg && with_srcs) { - llqt = now + (br->multicast_last_member_interval * - br->multicast_last_member_count); + llqt = now + (brmctx->multicast_last_member_interval * + brmctx->multicast_last_member_count); hlist_for_each_entry(ent, &pg->src_list, node) { if (over_llqt == time_after(ent->timer.expires, llqt) && @@ -933,10 +935,10 @@ static struct sk_buff *br_ip6_multicast_alloc_query(struct net_bridge *br, /* ICMPv6 */ skb_set_transport_header(skb, skb->len); interval = ipv6_addr_any(group) ? - br->multicast_query_response_interval : - br->multicast_last_member_interval; + brmctx->multicast_query_response_interval : + brmctx->multicast_last_member_interval; *igmp_type = ICMPV6_MGM_QUERY; - switch (br->multicast_mld_version) { + switch (brmctx->multicast_mld_version) { case 1: mldq = (struct mld_msg *)icmp6_hdr(skb); mldq->mld_type = ICMPV6_MGM_QUERY; @@ -959,7 +961,7 @@ static struct sk_buff *br_ip6_multicast_alloc_query(struct net_bridge *br, mld2q->mld2q_suppress = sflag; mld2q->mld2q_qrv = 2; mld2q->mld2q_nsrcs = htons(llqt_srcs); - mld2q->mld2q_qqic = br->multicast_query_interval / HZ; + mld2q->mld2q_qqic = brmctx->multicast_query_interval / HZ; mld2q->mld2q_mca = *group; csum = &mld2q->mld2q_cksum; csum_start = (void *)mld2q; @@ -1219,7 +1221,8 @@ void br_multicast_host_join(struct net_bridge_mdb_entry *mp, bool notify) if (br_group_is_l2(&mp->addr)) return; - mod_timer(&mp->timer, jiffies + mp->br->multicast_membership_interval); + mod_timer(&mp->timer, + jiffies + mp->br->multicast_ctx.multicast_membership_interval); } void br_multicast_host_leave(struct net_bridge_mdb_entry *mp, bool notify) @@ -1283,7 +1286,8 @@ __br_multicast_add_group(struct net_bridge *br, found: if (igmpv2_mldv1) - mod_timer(&p->timer, now + br->multicast_membership_interval); + mod_timer(&p->timer, + now + br->multicast_ctx.multicast_membership_interval); out: return p; @@ -1430,63 +1434,68 @@ static void br_mc_router_state_change(struct net_bridge *p, switchdev_port_attr_set(p->dev, &attr, NULL); } -static void br_multicast_local_router_expired(struct net_bridge *br, +static void br_multicast_local_router_expired(struct net_bridge_mcast *brmctx, struct timer_list *timer) { - spin_lock(&br->multicast_lock); - if (br->multicast_router == MDB_RTR_TYPE_DISABLED || - br->multicast_router == MDB_RTR_TYPE_PERM || - br_ip4_multicast_is_router(br) || - br_ip6_multicast_is_router(br)) + spin_lock(&brmctx->br->multicast_lock); + if (brmctx->multicast_router == MDB_RTR_TYPE_DISABLED || + brmctx->multicast_router == MDB_RTR_TYPE_PERM || + 
br_ip4_multicast_is_router(brmctx) || + br_ip6_multicast_is_router(brmctx)) goto out; - br_mc_router_state_change(br, false); + br_mc_router_state_change(brmctx->br, false); out: - spin_unlock(&br->multicast_lock); + spin_unlock(&brmctx->br->multicast_lock); } static void br_ip4_multicast_local_router_expired(struct timer_list *t) { - struct net_bridge *br = from_timer(br, t, ip4_mc_router_timer); + struct net_bridge_mcast *brmctx = from_timer(brmctx, t, + ip4_mc_router_timer); - br_multicast_local_router_expired(br, t); + br_multicast_local_router_expired(brmctx, t); } #if IS_ENABLED(CONFIG_IPV6) static void br_ip6_multicast_local_router_expired(struct timer_list *t) { - struct net_bridge *br = from_timer(br, t, ip6_mc_router_timer); + struct net_bridge_mcast *brmctx = from_timer(brmctx, t, + ip6_mc_router_timer); - br_multicast_local_router_expired(br, t); + br_multicast_local_router_expired(brmctx, t); } #endif -static void br_multicast_querier_expired(struct net_bridge *br, +static void br_multicast_querier_expired(struct net_bridge_mcast *brmctx, struct bridge_mcast_own_query *query) { - spin_lock(&br->multicast_lock); - if (!netif_running(br->dev) || !br_opt_get(br, BROPT_MULTICAST_ENABLED)) + spin_lock(&brmctx->br->multicast_lock); + if (!netif_running(brmctx->br->dev) || + !br_opt_get(brmctx->br, BROPT_MULTICAST_ENABLED)) goto out; - br_multicast_start_querier(br, query); + br_multicast_start_querier(brmctx->br, query); out: - spin_unlock(&br->multicast_lock); + spin_unlock(&brmctx->br->multicast_lock); } static void br_ip4_multicast_querier_expired(struct timer_list *t) { - struct net_bridge *br = from_timer(br, t, ip4_other_query.timer); + struct net_bridge_mcast *brmctx = from_timer(brmctx, t, + ip4_other_query.timer); - br_multicast_querier_expired(br, &br->ip4_own_query); + br_multicast_querier_expired(brmctx, &brmctx->ip4_own_query); } #if IS_ENABLED(CONFIG_IPV6) static void br_ip6_multicast_querier_expired(struct timer_list *t) { - struct net_bridge *br = from_timer(br, t, ip6_other_query.timer); + struct net_bridge_mcast *brmctx = from_timer(brmctx, t, + ip6_other_query.timer); - br_multicast_querier_expired(br, &br->ip6_own_query); + br_multicast_querier_expired(brmctx, &brmctx->ip6_own_query); } #endif @@ -1494,11 +1503,13 @@ static void br_multicast_select_own_querier(struct net_bridge *br, struct br_ip *ip, struct sk_buff *skb) { + struct net_bridge_mcast *brmctx = &br->multicast_ctx; + if (ip->proto == htons(ETH_P_IP)) - br->ip4_querier.addr.src.ip4 = ip_hdr(skb)->saddr; + brmctx->ip4_querier.addr.src.ip4 = ip_hdr(skb)->saddr; #if IS_ENABLED(CONFIG_IPV6) else - br->ip6_querier.addr.src.ip6 = ipv6_hdr(skb)->saddr; + brmctx->ip6_querier.addr.src.ip6 = ipv6_hdr(skb)->saddr; #endif } @@ -1546,6 +1557,7 @@ static void br_multicast_send_query(struct net_bridge *br, struct net_bridge_port *port, struct bridge_mcast_own_query *own_query) { + struct net_bridge_mcast *brmctx = &br->multicast_ctx; struct bridge_mcast_other_query *other_query = NULL; struct br_ip br_group; unsigned long time; @@ -1558,12 +1570,12 @@ static void br_multicast_send_query(struct net_bridge *br, memset(&br_group.dst, 0, sizeof(br_group.dst)); if (port ? 
(own_query == &port->multicast_ctx.ip4_own_query) : - (own_query == &br->ip4_own_query)) { - other_query = &br->ip4_other_query; + (own_query == &brmctx->ip4_own_query)) { + other_query = &brmctx->ip4_other_query; br_group.proto = htons(ETH_P_IP); #if IS_ENABLED(CONFIG_IPV6) } else { - other_query = &br->ip6_other_query; + other_query = &brmctx->ip6_other_query; br_group.proto = htons(ETH_P_IPV6); #endif } @@ -1575,9 +1587,9 @@ static void br_multicast_send_query(struct net_bridge *br, NULL); time = jiffies; - time += own_query->startup_sent < br->multicast_startup_query_count ? - br->multicast_startup_query_interval : - br->multicast_query_interval; + time += own_query->startup_sent < brmctx->multicast_startup_query_count ? + brmctx->multicast_startup_query_interval : + brmctx->multicast_query_interval; mod_timer(&own_query->timer, time); } @@ -1592,7 +1604,7 @@ br_multicast_port_query_expired(struct net_bridge_mcast_port *pmctx, pmctx->port->state == BR_STATE_BLOCKING) goto out; - if (query->startup_sent < br->multicast_startup_query_count) + if (query->startup_sent < br->multicast_ctx.multicast_startup_query_count) query->startup_sent++; br_multicast_send_query(pmctx->port->br, pmctx->port, query); @@ -1624,6 +1636,7 @@ static void br_multicast_port_group_rexmit(struct timer_list *t) struct net_bridge_port_group *pg = from_timer(pg, t, rexmit_timer); struct bridge_mcast_other_query *other_query = NULL; struct net_bridge *br = pg->key.port->br; + struct net_bridge_mcast *brmctx = &br->multicast_ctx; bool need_rexmit = false; spin_lock(&br->multicast_lock); @@ -1633,10 +1646,10 @@ static void br_multicast_port_group_rexmit(struct timer_list *t) goto out; if (pg->key.addr.proto == htons(ETH_P_IP)) - other_query = &br->ip4_other_query; + other_query = &brmctx->ip4_other_query; #if IS_ENABLED(CONFIG_IPV6) else - other_query = &br->ip6_other_query; + other_query = &brmctx->ip6_other_query; #endif if (!other_query || timer_pending(&other_query->timer)) @@ -1652,7 +1665,7 @@ static void br_multicast_port_group_rexmit(struct timer_list *t) if (pg->grp_query_rexmit_cnt || need_rexmit) mod_timer(&pg->rexmit_timer, jiffies + - br->multicast_last_member_interval); + brmctx->multicast_last_member_interval); out: spin_unlock(&br->multicast_lock); } @@ -1819,7 +1832,8 @@ static void __grp_src_query_marked_and_rexmit(struct net_bridge_port_group *pg) { struct bridge_mcast_other_query *other_query = NULL; struct net_bridge *br = pg->key.port->br; - u32 lmqc = br->multicast_last_member_count; + struct net_bridge_mcast *brmctx = &br->multicast_ctx; + u32 lmqc = brmctx->multicast_last_member_count; unsigned long lmqt, lmi, now = jiffies; struct net_bridge_group_src *ent; @@ -1828,10 +1842,10 @@ static void __grp_src_query_marked_and_rexmit(struct net_bridge_port_group *pg) return; if (pg->key.addr.proto == htons(ETH_P_IP)) - other_query = &br->ip4_other_query; + other_query = &brmctx->ip4_other_query; #if IS_ENABLED(CONFIG_IPV6) else - other_query = &br->ip6_other_query; + other_query = &brmctx->ip6_other_query; #endif lmqt = now + br_multicast_lmqt(br); @@ -1855,7 +1869,7 @@ static void __grp_src_query_marked_and_rexmit(struct net_bridge_port_group *pg) __br_multicast_send_query(br, pg->key.port, pg, &pg->key.addr, &pg->key.addr, true, 1, NULL); - lmi = now + br->multicast_last_member_interval; + lmi = now + brmctx->multicast_last_member_interval; if (!timer_pending(&pg->rexmit_timer) || time_after(pg->rexmit_timer.expires, lmi)) mod_timer(&pg->rexmit_timer, lmi); @@ -1865,6 +1879,7 @@ static void 
__grp_send_query_and_rexmit(struct net_bridge_port_group *pg) { struct bridge_mcast_other_query *other_query = NULL; struct net_bridge *br = pg->key.port->br; + struct net_bridge_mcast *brmctx = &br->multicast_ctx; unsigned long now = jiffies, lmi; if (!netif_running(br->dev) || @@ -1872,16 +1887,16 @@ static void __grp_send_query_and_rexmit(struct net_bridge_port_group *pg) return; if (pg->key.addr.proto == htons(ETH_P_IP)) - other_query = &br->ip4_other_query; + other_query = &brmctx->ip4_other_query; #if IS_ENABLED(CONFIG_IPV6) else - other_query = &br->ip6_other_query; + other_query = &brmctx->ip6_other_query; #endif if (br_opt_get(br, BROPT_MULTICAST_QUERIER) && other_query && !timer_pending(&other_query->timer)) { - lmi = now + br->multicast_last_member_interval; - pg->grp_query_rexmit_cnt = br->multicast_last_member_count - 1; + lmi = now + brmctx->multicast_last_member_interval; + pg->grp_query_rexmit_cnt = brmctx->multicast_last_member_count - 1; __br_multicast_send_query(br, pg->key.port, pg, &pg->key.addr, &pg->key.addr, false, 0, NULL); if (!timer_pending(&pg->rexmit_timer) || @@ -2405,7 +2420,7 @@ static int br_ip4_multicast_igmp3_report(struct net_bridge *br, struct sk_buff *skb, u16 vid) { - bool igmpv2 = br->multicast_igmp_version == 2; + bool igmpv2 = br->multicast_ctx.multicast_igmp_version == 2; struct net_bridge_mdb_entry *mdst; struct net_bridge_port_group *pg; const unsigned char *src; @@ -2517,7 +2532,7 @@ static int br_ip6_multicast_mld2_report(struct net_bridge *br, struct sk_buff *skb, u16 vid) { - bool mldv1 = br->multicast_mld_version == 1; + bool mldv1 = br->multicast_ctx.multicast_mld_version == 1; struct net_bridge_mdb_entry *mdst; struct net_bridge_port_group *pg; unsigned int nsrcs_offset; @@ -2655,23 +2670,25 @@ static bool br_ip4_multicast_select_querier(struct net_bridge *br, struct net_bridge_port *port, __be32 saddr) { - if (!timer_pending(&br->ip4_own_query.timer) && - !timer_pending(&br->ip4_other_query.timer)) + struct net_bridge_mcast *brmctx = &br->multicast_ctx; + + if (!timer_pending(&brmctx->ip4_own_query.timer) && + !timer_pending(&brmctx->ip4_other_query.timer)) goto update; - if (!br->ip4_querier.addr.src.ip4) + if (!brmctx->ip4_querier.addr.src.ip4) goto update; - if (ntohl(saddr) <= ntohl(br->ip4_querier.addr.src.ip4)) + if (ntohl(saddr) <= ntohl(brmctx->ip4_querier.addr.src.ip4)) goto update; return false; update: - br->ip4_querier.addr.src.ip4 = saddr; + brmctx->ip4_querier.addr.src.ip4 = saddr; /* update protected by general multicast_lock by caller */ - rcu_assign_pointer(br->ip4_querier.port, port); + rcu_assign_pointer(brmctx->ip4_querier.port, port); return true; } @@ -2681,20 +2698,22 @@ static bool br_ip6_multicast_select_querier(struct net_bridge *br, struct net_bridge_port *port, struct in6_addr *saddr) { - if (!timer_pending(&br->ip6_own_query.timer) && - !timer_pending(&br->ip6_other_query.timer)) + struct net_bridge_mcast *brmctx = &br->multicast_ctx; + + if (!timer_pending(&brmctx->ip6_own_query.timer) && + !timer_pending(&brmctx->ip6_other_query.timer)) goto update; - if (ipv6_addr_cmp(saddr, &br->ip6_querier.addr.src.ip6) <= 0) + if (ipv6_addr_cmp(saddr, &brmctx->ip6_querier.addr.src.ip6) <= 0) goto update; return false; update: - br->ip6_querier.addr.src.ip6 = *saddr; + brmctx->ip6_querier.addr.src.ip6 = *saddr; /* update protected by general multicast_lock by caller */ - rcu_assign_pointer(br->ip6_querier.port, port); + rcu_assign_pointer(brmctx->ip6_querier.port, port); return true; } @@ -2708,7 +2727,8 @@ 
br_multicast_update_query_timer(struct net_bridge *br, if (!timer_pending(&query->timer)) query->delay_time = jiffies + max_delay; - mod_timer(&query->timer, jiffies + br->multicast_querier_interval); + mod_timer(&query->timer, jiffies + + br->multicast_ctx.multicast_querier_interval); } static void br_port_mc_router_state_change(struct net_bridge_port *p, @@ -2725,14 +2745,14 @@ static void br_port_mc_router_state_change(struct net_bridge_port *p, } static struct net_bridge_port * -br_multicast_rport_from_node(struct net_bridge *br, +br_multicast_rport_from_node(struct net_bridge_mcast *brmctx, struct hlist_head *mc_router_list, struct hlist_node *rlist) { struct net_bridge_mcast_port *pmctx; #if IS_ENABLED(CONFIG_IPV6) - if (mc_router_list == &br->ip6_mc_router_list) + if (mc_router_list == &brmctx->ip6_mc_router_list) pmctx = hlist_entry(rlist, struct net_bridge_mcast_port, ip6_rlist); else @@ -2744,7 +2764,7 @@ br_multicast_rport_from_node(struct net_bridge *br, } static struct hlist_node * -br_multicast_get_rport_slot(struct net_bridge *br, +br_multicast_get_rport_slot(struct net_bridge_mcast *brmctx, struct net_bridge_port *port, struct hlist_head *mc_router_list) @@ -2754,7 +2774,7 @@ br_multicast_get_rport_slot(struct net_bridge *br, struct hlist_node *rlist; hlist_for_each(rlist, mc_router_list) { - p = br_multicast_rport_from_node(br, mc_router_list, rlist); + p = br_multicast_rport_from_node(brmctx, mc_router_list, rlist); if ((unsigned long)port >= (unsigned long)p) break; @@ -2782,7 +2802,7 @@ static bool br_multicast_no_router_otherpf(struct net_bridge_port *port, * list is maintained ordered by pointer value * and locked by br->multicast_lock and RCU */ -static void br_multicast_add_router(struct net_bridge *br, +static void br_multicast_add_router(struct net_bridge_mcast *brmctx, struct net_bridge_port *port, struct hlist_node *rlist, struct hlist_head *mc_router_list) @@ -2792,7 +2812,7 @@ static void br_multicast_add_router(struct net_bridge *br, if (!hlist_unhashed(rlist)) return; - slot = br_multicast_get_rport_slot(br, port, mc_router_list); + slot = br_multicast_get_rport_slot(brmctx, port, mc_router_list); if (slot) hlist_add_behind_rcu(rlist, slot); @@ -2804,7 +2824,7 @@ static void br_multicast_add_router(struct net_bridge *br, * IPv4 or IPv6 multicast router. 
*/ if (br_multicast_no_router_otherpf(port, rlist)) { - br_rtr_notify(br->dev, port, RTM_NEWMDB); + br_rtr_notify(port->br->dev, port, RTM_NEWMDB); br_port_mc_router_state_change(port, true); } } @@ -2816,8 +2836,9 @@ static void br_multicast_add_router(struct net_bridge *br, static void br_ip4_multicast_add_router(struct net_bridge *br, struct net_bridge_port *port) { - br_multicast_add_router(br, port, &port->multicast_ctx.ip4_rlist, - &br->ip4_mc_router_list); + br_multicast_add_router(&br->multicast_ctx, port, + &port->multicast_ctx.ip4_rlist, + &br->multicast_ctx.ip4_mc_router_list); } /* Add port to router_list @@ -2828,8 +2849,9 @@ static void br_ip6_multicast_add_router(struct net_bridge *br, struct net_bridge_port *port) { #if IS_ENABLED(CONFIG_IPV6) - br_multicast_add_router(br, port, &port->multicast_ctx.ip6_rlist, - &br->ip6_mc_router_list); + br_multicast_add_router(&br->multicast_ctx, port, + &port->multicast_ctx.ip6_rlist, + &br->multicast_ctx.ip6_mc_router_list); #endif } @@ -2839,14 +2861,15 @@ static void br_multicast_mark_router(struct net_bridge *br, struct hlist_node *rlist, struct hlist_head *mc_router_list) { + struct net_bridge_mcast *brmctx = &br->multicast_ctx; unsigned long now = jiffies; if (!port) { - if (br->multicast_router == MDB_RTR_TYPE_TEMP_QUERY) { - if (!br_ip4_multicast_is_router(br) && - !br_ip6_multicast_is_router(br)) + if (brmctx->multicast_router == MDB_RTR_TYPE_TEMP_QUERY) { + if (!br_ip4_multicast_is_router(brmctx) && + !br_ip6_multicast_is_router(brmctx)) br_mc_router_state_change(br, true); - mod_timer(timer, now + br->multicast_querier_interval); + mod_timer(timer, now + brmctx->multicast_querier_interval); } return; } @@ -2855,14 +2878,14 @@ static void br_multicast_mark_router(struct net_bridge *br, port->multicast_ctx.multicast_router == MDB_RTR_TYPE_PERM) return; - br_multicast_add_router(br, port, rlist, mc_router_list); - mod_timer(timer, now + br->multicast_querier_interval); + br_multicast_add_router(brmctx, port, rlist, mc_router_list); + mod_timer(timer, now + brmctx->multicast_querier_interval); } static void br_ip4_multicast_mark_router(struct net_bridge *br, struct net_bridge_port *port) { - struct timer_list *timer = &br->ip4_mc_router_timer; + struct timer_list *timer = &br->multicast_ctx.ip4_mc_router_timer; struct hlist_node *rlist = NULL; if (port) { @@ -2871,14 +2894,14 @@ static void br_ip4_multicast_mark_router(struct net_bridge *br, } br_multicast_mark_router(br, port, timer, rlist, - &br->ip4_mc_router_list); + &br->multicast_ctx.ip4_mc_router_list); } static void br_ip6_multicast_mark_router(struct net_bridge *br, struct net_bridge_port *port) { #if IS_ENABLED(CONFIG_IPV6) - struct timer_list *timer = &br->ip6_mc_router_timer; + struct timer_list *timer = &br->multicast_ctx.ip6_mc_router_timer; struct hlist_node *rlist = NULL; if (port) { @@ -2887,7 +2910,7 @@ static void br_ip6_multicast_mark_router(struct net_bridge *br, } br_multicast_mark_router(br, port, timer, rlist, - &br->ip6_mc_router_list); + &br->multicast_ctx.ip6_mc_router_list); #endif } @@ -2926,6 +2949,7 @@ static void br_ip4_multicast_query(struct net_bridge *br, struct sk_buff *skb, u16 vid) { + struct net_bridge_mcast *brmctx = &br->multicast_ctx; unsigned int transport_len = ip_transport_len(skb); const struct iphdr *iph = ip_hdr(skb); struct igmphdr *ih = igmp_hdr(skb); @@ -2955,7 +2979,8 @@ static void br_ip4_multicast_query(struct net_bridge *br, } else if (transport_len >= sizeof(*ih3)) { ih3 = igmpv3_query_hdr(skb); if (ih3->nsrcs || - 
(br->multicast_igmp_version == 3 && group && ih3->suppress)) + (brmctx->multicast_igmp_version == 3 && group && + ih3->suppress)) goto out; max_delay = ih3->code ? @@ -2968,7 +2993,8 @@ static void br_ip4_multicast_query(struct net_bridge *br, saddr.proto = htons(ETH_P_IP); saddr.src.ip4 = iph->saddr; - br_ip4_multicast_query_received(br, port, &br->ip4_other_query, + br_ip4_multicast_query_received(br, port, + &brmctx->ip4_other_query, &saddr, max_delay); goto out; } @@ -2977,7 +3003,7 @@ static void br_ip4_multicast_query(struct net_bridge *br, if (!mp) goto out; - max_delay *= br->multicast_last_member_count; + max_delay *= brmctx->multicast_last_member_count; if (mp->host_joined && (timer_pending(&mp->timer) ? @@ -2991,7 +3017,7 @@ static void br_ip4_multicast_query(struct net_bridge *br, if (timer_pending(&p->timer) ? time_after(p->timer.expires, now + max_delay) : try_to_del_timer_sync(&p->timer) >= 0 && - (br->multicast_igmp_version == 2 || + (brmctx->multicast_igmp_version == 2 || p->filter_mode == MCAST_EXCLUDE)) mod_timer(&p->timer, now + max_delay); } @@ -3006,6 +3032,7 @@ static int br_ip6_multicast_query(struct net_bridge *br, struct sk_buff *skb, u16 vid) { + struct net_bridge_mcast *brmctx = &br->multicast_ctx; unsigned int transport_len = ipv6_transport_len(skb); struct mld_msg *mld; struct net_bridge_mdb_entry *mp; @@ -3042,7 +3069,7 @@ static int br_ip6_multicast_query(struct net_bridge *br, mld2q = (struct mld2_query *)icmp6_hdr(skb); if (!mld2q->mld2q_nsrcs) group = &mld2q->mld2q_mca; - if (br->multicast_mld_version == 2 && + if (brmctx->multicast_mld_version == 2 && !ipv6_addr_any(&mld2q->mld2q_mca) && mld2q->mld2q_suppress) goto out; @@ -3056,7 +3083,8 @@ static int br_ip6_multicast_query(struct net_bridge *br, saddr.proto = htons(ETH_P_IPV6); saddr.src.ip6 = ipv6_hdr(skb)->saddr; - br_ip6_multicast_query_received(br, port, &br->ip6_other_query, + br_ip6_multicast_query_received(br, port, + &brmctx->ip6_other_query, &saddr, max_delay); goto out; } else if (!group) { @@ -3067,7 +3095,7 @@ static int br_ip6_multicast_query(struct net_bridge *br, if (!mp) goto out; - max_delay *= br->multicast_last_member_count; + max_delay *= brmctx->multicast_last_member_count; if (mp->host_joined && (timer_pending(&mp->timer) ? time_after(mp->timer.expires, now + max_delay) : @@ -3080,7 +3108,7 @@ static int br_ip6_multicast_query(struct net_bridge *br, if (timer_pending(&p->timer) ? 
time_after(p->timer.expires, now + max_delay) : try_to_del_timer_sync(&p->timer) >= 0 && - (br->multicast_mld_version == 1 || + (brmctx->multicast_mld_version == 1 || p->filter_mode == MCAST_EXCLUDE)) mod_timer(&p->timer, now + max_delay); } @@ -3099,6 +3127,7 @@ br_multicast_leave_group(struct net_bridge *br, struct bridge_mcast_own_query *own_query, const unsigned char *src) { + struct net_bridge_mcast *brmctx = &br->multicast_ctx; struct net_bridge_mdb_entry *mp; struct net_bridge_port_group *p; unsigned long now; @@ -3138,8 +3167,8 @@ br_multicast_leave_group(struct net_bridge *br, __br_multicast_send_query(br, port, NULL, NULL, &mp->addr, false, 0, NULL); - time = jiffies + br->multicast_last_member_count * - br->multicast_last_member_interval; + time = jiffies + brmctx->multicast_last_member_count * + brmctx->multicast_last_member_interval; mod_timer(&own_query->timer, time); @@ -3161,8 +3190,8 @@ br_multicast_leave_group(struct net_bridge *br, } now = jiffies; - time = now + br->multicast_last_member_count * - br->multicast_last_member_interval; + time = now + brmctx->multicast_last_member_count * + brmctx->multicast_last_member_interval; if (!port) { if (mp->host_joined && @@ -3207,14 +3236,15 @@ static void br_ip4_multicast_leave_group(struct net_bridge *br, return; own_query = port ? &port->multicast_ctx.ip4_own_query : - &br->ip4_own_query; + &br->multicast_ctx.ip4_own_query; memset(&br_group, 0, sizeof(br_group)); br_group.dst.ip4 = group; br_group.proto = htons(ETH_P_IP); br_group.vid = vid; - br_multicast_leave_group(br, port, &br_group, &br->ip4_other_query, + br_multicast_leave_group(br, port, &br_group, + &br->multicast_ctx.ip4_other_query, own_query, src); } @@ -3232,14 +3262,15 @@ static void br_ip6_multicast_leave_group(struct net_bridge *br, return; own_query = port ? 
&port->multicast_ctx.ip6_own_query : - &br->ip6_own_query; + &br->multicast_ctx.ip6_own_query; memset(&br_group, 0, sizeof(br_group)); br_group.dst.ip6 = *group; br_group.proto = htons(ETH_P_IPV6); br_group.vid = vid; - br_multicast_leave_group(br, port, &br_group, &br->ip6_other_query, + br_multicast_leave_group(br, port, &br_group, + &br->multicast_ctx.ip6_other_query, own_query, src); } #endif @@ -3460,7 +3491,7 @@ static void br_multicast_query_expired(struct net_bridge *br, struct bridge_mcast_querier *querier) { spin_lock(&br->multicast_lock); - if (query->startup_sent < br->multicast_startup_query_count) + if (query->startup_sent < br->multicast_ctx.multicast_startup_query_count) query->startup_sent++; RCU_INIT_POINTER(querier->port, NULL); @@ -3470,17 +3501,21 @@ static void br_multicast_query_expired(struct net_bridge *br, static void br_ip4_multicast_query_expired(struct timer_list *t) { - struct net_bridge *br = from_timer(br, t, ip4_own_query.timer); + struct net_bridge_mcast *brmctx = from_timer(brmctx, t, + ip4_own_query.timer); - br_multicast_query_expired(br, &br->ip4_own_query, &br->ip4_querier); + br_multicast_query_expired(brmctx->br, &brmctx->ip4_own_query, + &brmctx->ip4_querier); } #if IS_ENABLED(CONFIG_IPV6) static void br_ip6_multicast_query_expired(struct timer_list *t) { - struct net_bridge *br = from_timer(br, t, ip6_own_query.timer); + struct net_bridge_mcast *brmctx = from_timer(brmctx, t, + ip6_own_query.timer); - br_multicast_query_expired(br, &br->ip6_own_query, &br->ip6_querier); + br_multicast_query_expired(brmctx->br, &brmctx->ip6_own_query, + &brmctx->ip6_querier); } #endif @@ -3501,41 +3536,42 @@ void br_multicast_init(struct net_bridge *br) { br->hash_max = BR_MULTICAST_DEFAULT_HASH_MAX; - br->multicast_router = MDB_RTR_TYPE_TEMP_QUERY; - br->multicast_last_member_count = 2; - br->multicast_startup_query_count = 2; - - br->multicast_last_member_interval = HZ; - br->multicast_query_response_interval = 10 * HZ; - br->multicast_startup_query_interval = 125 * HZ / 4; - br->multicast_query_interval = 125 * HZ; - br->multicast_querier_interval = 255 * HZ; - br->multicast_membership_interval = 260 * HZ; - - br->ip4_other_query.delay_time = 0; - br->ip4_querier.port = NULL; - br->multicast_igmp_version = 2; + br->multicast_ctx.br = br; + br->multicast_ctx.multicast_router = MDB_RTR_TYPE_TEMP_QUERY; + br->multicast_ctx.multicast_last_member_count = 2; + br->multicast_ctx.multicast_startup_query_count = 2; + + br->multicast_ctx.multicast_last_member_interval = HZ; + br->multicast_ctx.multicast_query_response_interval = 10 * HZ; + br->multicast_ctx.multicast_startup_query_interval = 125 * HZ / 4; + br->multicast_ctx.multicast_query_interval = 125 * HZ; + br->multicast_ctx.multicast_querier_interval = 255 * HZ; + br->multicast_ctx.multicast_membership_interval = 260 * HZ; + + br->multicast_ctx.ip4_other_query.delay_time = 0; + br->multicast_ctx.ip4_querier.port = NULL; + br->multicast_ctx.multicast_igmp_version = 2; #if IS_ENABLED(CONFIG_IPV6) - br->multicast_mld_version = 1; - br->ip6_other_query.delay_time = 0; - br->ip6_querier.port = NULL; + br->multicast_ctx.multicast_mld_version = 1; + br->multicast_ctx.ip6_other_query.delay_time = 0; + br->multicast_ctx.ip6_querier.port = NULL; #endif br_opt_toggle(br, BROPT_MULTICAST_ENABLED, true); br_opt_toggle(br, BROPT_HAS_IPV6_ADDR, true); spin_lock_init(&br->multicast_lock); - timer_setup(&br->ip4_mc_router_timer, + timer_setup(&br->multicast_ctx.ip4_mc_router_timer, br_ip4_multicast_local_router_expired, 0); - 
timer_setup(&br->ip4_other_query.timer, + timer_setup(&br->multicast_ctx.ip4_other_query.timer, br_ip4_multicast_querier_expired, 0); - timer_setup(&br->ip4_own_query.timer, + timer_setup(&br->multicast_ctx.ip4_own_query.timer, br_ip4_multicast_query_expired, 0); #if IS_ENABLED(CONFIG_IPV6) - timer_setup(&br->ip6_mc_router_timer, + timer_setup(&br->multicast_ctx.ip6_mc_router_timer, br_ip6_multicast_local_router_expired, 0); - timer_setup(&br->ip6_other_query.timer, + timer_setup(&br->multicast_ctx.ip6_other_query.timer, br_ip6_multicast_querier_expired, 0); - timer_setup(&br->ip6_own_query.timer, + timer_setup(&br->multicast_ctx.ip6_own_query.timer, br_ip6_multicast_query_expired, 0); #endif INIT_HLIST_HEAD(&br->mdb_list); @@ -3618,21 +3654,21 @@ static void __br_multicast_open(struct net_bridge *br, void br_multicast_open(struct net_bridge *br) { - __br_multicast_open(br, &br->ip4_own_query); + __br_multicast_open(br, &br->multicast_ctx.ip4_own_query); #if IS_ENABLED(CONFIG_IPV6) - __br_multicast_open(br, &br->ip6_own_query); + __br_multicast_open(br, &br->multicast_ctx.ip6_own_query); #endif } void br_multicast_stop(struct net_bridge *br) { - del_timer_sync(&br->ip4_mc_router_timer); - del_timer_sync(&br->ip4_other_query.timer); - del_timer_sync(&br->ip4_own_query.timer); + del_timer_sync(&br->multicast_ctx.ip4_mc_router_timer); + del_timer_sync(&br->multicast_ctx.ip4_other_query.timer); + del_timer_sync(&br->multicast_ctx.ip4_own_query.timer); #if IS_ENABLED(CONFIG_IPV6) - del_timer_sync(&br->ip6_mc_router_timer); - del_timer_sync(&br->ip6_other_query.timer); - del_timer_sync(&br->ip6_own_query.timer); + del_timer_sync(&br->multicast_ctx.ip6_mc_router_timer); + del_timer_sync(&br->multicast_ctx.ip6_other_query.timer); + del_timer_sync(&br->multicast_ctx.ip6_own_query.timer); #endif } @@ -3656,6 +3692,7 @@ void br_multicast_dev_del(struct net_bridge *br) int br_multicast_set_router(struct net_bridge *br, unsigned long val) { + struct net_bridge_mcast *brmctx = &br->multicast_ctx; int err = -EINVAL; spin_lock_bh(&br->multicast_lock); @@ -3664,17 +3701,17 @@ int br_multicast_set_router(struct net_bridge *br, unsigned long val) case MDB_RTR_TYPE_DISABLED: case MDB_RTR_TYPE_PERM: br_mc_router_state_change(br, val == MDB_RTR_TYPE_PERM); - del_timer(&br->ip4_mc_router_timer); + del_timer(&brmctx->ip4_mc_router_timer); #if IS_ENABLED(CONFIG_IPV6) - del_timer(&br->ip6_mc_router_timer); + del_timer(&brmctx->ip6_mc_router_timer); #endif - br->multicast_router = val; + brmctx->multicast_router = val; err = 0; break; case MDB_RTR_TYPE_TEMP_QUERY: - if (br->multicast_router != MDB_RTR_TYPE_TEMP_QUERY) + if (brmctx->multicast_router != MDB_RTR_TYPE_TEMP_QUERY) br_mc_router_state_change(br, false); - br->multicast_router = val; + brmctx->multicast_router = val; err = 0; break; } @@ -3710,20 +3747,20 @@ br_multicast_rport_del_notify(struct net_bridge_port *p, bool deleted) int br_multicast_set_port_router(struct net_bridge_port *p, unsigned long val) { - struct net_bridge *br = p->br; + struct net_bridge_mcast *brmctx = &p->br->multicast_ctx; unsigned long now = jiffies; int err = -EINVAL; bool del = false; - spin_lock(&br->multicast_lock); + spin_lock(&p->br->multicast_lock); if (p->multicast_ctx.multicast_router == val) { /* Refresh the temp router port timer */ if (p->multicast_ctx.multicast_router == MDB_RTR_TYPE_TEMP) { mod_timer(&p->multicast_ctx.ip4_mc_router_timer, - now + br->multicast_querier_interval); + now + brmctx->multicast_querier_interval); #if IS_ENABLED(CONFIG_IPV6) 
mod_timer(&p->multicast_ctx.ip6_mc_router_timer, - now + br->multicast_querier_interval); + now + brmctx->multicast_querier_interval); #endif } err = 0; @@ -3749,23 +3786,23 @@ int br_multicast_set_port_router(struct net_bridge_port *p, unsigned long val) case MDB_RTR_TYPE_PERM: p->multicast_ctx.multicast_router = MDB_RTR_TYPE_PERM; del_timer(&p->multicast_ctx.ip4_mc_router_timer); - br_ip4_multicast_add_router(br, p); + br_ip4_multicast_add_router(p->br, p); #if IS_ENABLED(CONFIG_IPV6) del_timer(&p->multicast_ctx.ip6_mc_router_timer); #endif - br_ip6_multicast_add_router(br, p); + br_ip6_multicast_add_router(p->br, p); break; case MDB_RTR_TYPE_TEMP: p->multicast_ctx.multicast_router = MDB_RTR_TYPE_TEMP; - br_ip4_multicast_mark_router(br, p); - br_ip6_multicast_mark_router(br, p); + br_ip4_multicast_mark_router(p->br, p); + br_ip6_multicast_mark_router(p->br, p); break; default: goto unlock; } err = 0; unlock: - spin_unlock(&br->multicast_lock); + spin_unlock(&p->br->multicast_lock); return err; } @@ -3783,7 +3820,7 @@ static void br_multicast_start_querier(struct net_bridge *br, port->state == BR_STATE_BLOCKING) continue; - if (query == &br->ip4_own_query) + if (query == &br->multicast_ctx.ip4_own_query) br_multicast_enable(&port->multicast_ctx.ip4_own_query); #if IS_ENABLED(CONFIG_IPV6) else @@ -3872,6 +3909,7 @@ EXPORT_SYMBOL_GPL(br_multicast_router); int br_multicast_set_querier(struct net_bridge *br, unsigned long val) { + struct net_bridge_mcast *brmctx = &br->multicast_ctx; unsigned long max_delay; val = !!val; @@ -3884,18 +3922,18 @@ int br_multicast_set_querier(struct net_bridge *br, unsigned long val) if (!val) goto unlock; - max_delay = br->multicast_query_response_interval; + max_delay = brmctx->multicast_query_response_interval; - if (!timer_pending(&br->ip4_other_query.timer)) - br->ip4_other_query.delay_time = jiffies + max_delay; + if (!timer_pending(&brmctx->ip4_other_query.timer)) + brmctx->ip4_other_query.delay_time = jiffies + max_delay; - br_multicast_start_querier(br, &br->ip4_own_query); + br_multicast_start_querier(br, &brmctx->ip4_own_query); #if IS_ENABLED(CONFIG_IPV6) - if (!timer_pending(&br->ip6_other_query.timer)) - br->ip6_other_query.delay_time = jiffies + max_delay; + if (!timer_pending(&brmctx->ip6_other_query.timer)) + brmctx->ip6_other_query.delay_time = jiffies + max_delay; - br_multicast_start_querier(br, &br->ip6_own_query); + br_multicast_start_querier(br, &brmctx->ip6_own_query); #endif unlock: @@ -3916,7 +3954,7 @@ int br_multicast_set_igmp_version(struct net_bridge *br, unsigned long val) } spin_lock_bh(&br->multicast_lock); - br->multicast_igmp_version = val; + br->multicast_ctx.multicast_igmp_version = val; spin_unlock_bh(&br->multicast_lock); return 0; @@ -3935,7 +3973,7 @@ int br_multicast_set_mld_version(struct net_bridge *br, unsigned long val) } spin_lock_bh(&br->multicast_lock); - br->multicast_mld_version = val; + br->multicast_ctx.multicast_mld_version = val; spin_unlock_bh(&br->multicast_lock); return 0; @@ -4047,6 +4085,7 @@ EXPORT_SYMBOL_GPL(br_multicast_has_querier_anywhere); */ bool br_multicast_has_querier_adjacent(struct net_device *dev, int proto) { + struct net_bridge_mcast *brmctx; struct net_bridge *br; struct net_bridge_port *port; bool ret = false; @@ -4060,17 +4099,18 @@ bool br_multicast_has_querier_adjacent(struct net_device *dev, int proto) goto unlock; br = port->br; + brmctx = &br->multicast_ctx; switch (proto) { case ETH_P_IP: - if (!timer_pending(&br->ip4_other_query.timer) || - rcu_dereference(br->ip4_querier.port) 
== port) + if (!timer_pending(&brmctx->ip4_other_query.timer) || + rcu_dereference(brmctx->ip4_querier.port) == port) goto unlock; break; #if IS_ENABLED(CONFIG_IPV6) case ETH_P_IPV6: - if (!timer_pending(&br->ip6_other_query.timer) || - rcu_dereference(br->ip6_querier.port) == port) + if (!timer_pending(&brmctx->ip6_other_query.timer) || + rcu_dereference(brmctx->ip6_querier.port) == port) goto unlock; break; #endif @@ -4097,6 +4137,7 @@ EXPORT_SYMBOL_GPL(br_multicast_has_querier_adjacent); bool br_multicast_has_router_adjacent(struct net_device *dev, int proto) { struct net_bridge_mcast_port *pmctx; + struct net_bridge_mcast *brmctx; struct net_bridge_port *port; bool ret = false; @@ -4105,9 +4146,10 @@ bool br_multicast_has_router_adjacent(struct net_device *dev, int proto) if (!port) goto unlock; + brmctx = &port->br->multicast_ctx; switch (proto) { case ETH_P_IP: - hlist_for_each_entry_rcu(pmctx, &port->br->ip4_mc_router_list, + hlist_for_each_entry_rcu(pmctx, &brmctx->ip4_mc_router_list, ip4_rlist) { if (pmctx->port == port) continue; @@ -4118,7 +4160,7 @@ bool br_multicast_has_router_adjacent(struct net_device *dev, int proto) break; #if IS_ENABLED(CONFIG_IPV6) case ETH_P_IPV6: - hlist_for_each_entry_rcu(pmctx, &port->br->ip6_mc_router_list, + hlist_for_each_entry_rcu(pmctx, &brmctx->ip6_mc_router_list, ip6_rlist) { if (pmctx->port == port) continue; diff --git a/net/bridge/br_netlink.c b/net/bridge/br_netlink.c index 0cbe5826cfe8..3d5860e41084 100644 --- a/net/bridge/br_netlink.c +++ b/net/bridge/br_netlink.c @@ -1387,49 +1387,49 @@ static int br_changelink(struct net_device *brdev, struct nlattr *tb[], if (data[IFLA_BR_MCAST_LAST_MEMBER_CNT]) { u32 val = nla_get_u32(data[IFLA_BR_MCAST_LAST_MEMBER_CNT]); - br->multicast_last_member_count = val; + br->multicast_ctx.multicast_last_member_count = val; } if (data[IFLA_BR_MCAST_STARTUP_QUERY_CNT]) { u32 val = nla_get_u32(data[IFLA_BR_MCAST_STARTUP_QUERY_CNT]); - br->multicast_startup_query_count = val; + br->multicast_ctx.multicast_startup_query_count = val; } if (data[IFLA_BR_MCAST_LAST_MEMBER_INTVL]) { u64 val = nla_get_u64(data[IFLA_BR_MCAST_LAST_MEMBER_INTVL]); - br->multicast_last_member_interval = clock_t_to_jiffies(val); + br->multicast_ctx.multicast_last_member_interval = clock_t_to_jiffies(val); } if (data[IFLA_BR_MCAST_MEMBERSHIP_INTVL]) { u64 val = nla_get_u64(data[IFLA_BR_MCAST_MEMBERSHIP_INTVL]); - br->multicast_membership_interval = clock_t_to_jiffies(val); + br->multicast_ctx.multicast_membership_interval = clock_t_to_jiffies(val); } if (data[IFLA_BR_MCAST_QUERIER_INTVL]) { u64 val = nla_get_u64(data[IFLA_BR_MCAST_QUERIER_INTVL]); - br->multicast_querier_interval = clock_t_to_jiffies(val); + br->multicast_ctx.multicast_querier_interval = clock_t_to_jiffies(val); } if (data[IFLA_BR_MCAST_QUERY_INTVL]) { u64 val = nla_get_u64(data[IFLA_BR_MCAST_QUERY_INTVL]); - br->multicast_query_interval = clock_t_to_jiffies(val); + br->multicast_ctx.multicast_query_interval = clock_t_to_jiffies(val); } if (data[IFLA_BR_MCAST_QUERY_RESPONSE_INTVL]) { u64 val = nla_get_u64(data[IFLA_BR_MCAST_QUERY_RESPONSE_INTVL]); - br->multicast_query_response_interval = clock_t_to_jiffies(val); + br->multicast_ctx.multicast_query_response_interval = clock_t_to_jiffies(val); } if (data[IFLA_BR_MCAST_STARTUP_QUERY_INTVL]) { u64 val = nla_get_u64(data[IFLA_BR_MCAST_STARTUP_QUERY_INTVL]); - br->multicast_startup_query_interval = clock_t_to_jiffies(val); + br->multicast_ctx.multicast_startup_query_interval = clock_t_to_jiffies(val); } if 
(data[IFLA_BR_MCAST_STATS_ENABLED]) { @@ -1636,7 +1636,8 @@ static int br_fill_info(struct sk_buff *skb, const struct net_device *brdev) return -EMSGSIZE; #endif #ifdef CONFIG_BRIDGE_IGMP_SNOOPING - if (nla_put_u8(skb, IFLA_BR_MCAST_ROUTER, br->multicast_router) || + if (nla_put_u8(skb, IFLA_BR_MCAST_ROUTER, + br->multicast_ctx.multicast_router) || nla_put_u8(skb, IFLA_BR_MCAST_SNOOPING, br_opt_get(br, BROPT_MULTICAST_ENABLED)) || nla_put_u8(skb, IFLA_BR_MCAST_QUERY_USE_IFADDR, @@ -1648,38 +1649,38 @@ static int br_fill_info(struct sk_buff *skb, const struct net_device *brdev) nla_put_u32(skb, IFLA_BR_MCAST_HASH_ELASTICITY, RHT_ELASTICITY) || nla_put_u32(skb, IFLA_BR_MCAST_HASH_MAX, br->hash_max) || nla_put_u32(skb, IFLA_BR_MCAST_LAST_MEMBER_CNT, - br->multicast_last_member_count) || + br->multicast_ctx.multicast_last_member_count) || nla_put_u32(skb, IFLA_BR_MCAST_STARTUP_QUERY_CNT, - br->multicast_startup_query_count) || + br->multicast_ctx.multicast_startup_query_count) || nla_put_u8(skb, IFLA_BR_MCAST_IGMP_VERSION, - br->multicast_igmp_version)) + br->multicast_ctx.multicast_igmp_version)) return -EMSGSIZE; #if IS_ENABLED(CONFIG_IPV6) if (nla_put_u8(skb, IFLA_BR_MCAST_MLD_VERSION, - br->multicast_mld_version)) + br->multicast_ctx.multicast_mld_version)) return -EMSGSIZE; #endif - clockval = jiffies_to_clock_t(br->multicast_last_member_interval); + clockval = jiffies_to_clock_t(br->multicast_ctx.multicast_last_member_interval); if (nla_put_u64_64bit(skb, IFLA_BR_MCAST_LAST_MEMBER_INTVL, clockval, IFLA_BR_PAD)) return -EMSGSIZE; - clockval = jiffies_to_clock_t(br->multicast_membership_interval); + clockval = jiffies_to_clock_t(br->multicast_ctx.multicast_membership_interval); if (nla_put_u64_64bit(skb, IFLA_BR_MCAST_MEMBERSHIP_INTVL, clockval, IFLA_BR_PAD)) return -EMSGSIZE; - clockval = jiffies_to_clock_t(br->multicast_querier_interval); + clockval = jiffies_to_clock_t(br->multicast_ctx.multicast_querier_interval); if (nla_put_u64_64bit(skb, IFLA_BR_MCAST_QUERIER_INTVL, clockval, IFLA_BR_PAD)) return -EMSGSIZE; - clockval = jiffies_to_clock_t(br->multicast_query_interval); + clockval = jiffies_to_clock_t(br->multicast_ctx.multicast_query_interval); if (nla_put_u64_64bit(skb, IFLA_BR_MCAST_QUERY_INTVL, clockval, IFLA_BR_PAD)) return -EMSGSIZE; - clockval = jiffies_to_clock_t(br->multicast_query_response_interval); + clockval = jiffies_to_clock_t(br->multicast_ctx.multicast_query_response_interval); if (nla_put_u64_64bit(skb, IFLA_BR_MCAST_QUERY_RESPONSE_INTVL, clockval, IFLA_BR_PAD)) return -EMSGSIZE; - clockval = jiffies_to_clock_t(br->multicast_startup_query_interval); + clockval = jiffies_to_clock_t(br->multicast_ctx.multicast_startup_query_interval); if (nla_put_u64_64bit(skb, IFLA_BR_MCAST_STARTUP_QUERY_INTVL, clockval, IFLA_BR_PAD)) return -EMSGSIZE; diff --git a/net/bridge/br_private.h b/net/bridge/br_private.h index 61034fd8a3bd..badd490fce7a 100644 --- a/net/bridge/br_private.h +++ b/net/bridge/br_private.h @@ -106,6 +106,40 @@ struct net_bridge_mcast_port { #endif /* CONFIG_BRIDGE_IGMP_SNOOPING */ }; +/* net_bridge_mcast must be always defined due to forwarding stubs */ +struct net_bridge_mcast { +#ifdef CONFIG_BRIDGE_IGMP_SNOOPING + struct net_bridge *br; + + u32 multicast_last_member_count; + u32 multicast_startup_query_count; + + u8 multicast_igmp_version; + u8 multicast_router; +#if IS_ENABLED(CONFIG_IPV6) + u8 multicast_mld_version; +#endif + unsigned long multicast_last_member_interval; + unsigned long multicast_membership_interval; + unsigned long 
multicast_querier_interval; + unsigned long multicast_query_interval; + unsigned long multicast_query_response_interval; + unsigned long multicast_startup_query_interval; + struct hlist_head ip4_mc_router_list; + struct timer_list ip4_mc_router_timer; + struct bridge_mcast_other_query ip4_other_query; + struct bridge_mcast_own_query ip4_own_query; + struct bridge_mcast_querier ip4_querier; +#if IS_ENABLED(CONFIG_IPV6) + struct hlist_head ip6_mc_router_list; + struct timer_list ip6_mc_router_timer; + struct bridge_mcast_other_query ip6_other_query; + struct bridge_mcast_own_query ip6_own_query; + struct bridge_mcast_querier ip6_querier; +#endif /* IS_ENABLED(CONFIG_IPV6) */ +#endif /* CONFIG_BRIDGE_IGMP_SNOOPING */ +}; + struct br_tunnel_info { __be64 tunnel_id; struct metadata_dst __rcu *tunnel_dst; @@ -445,25 +479,14 @@ struct net_bridge { BR_USER_STP, /* new RSTP in userspace */ } stp_enabled; + struct net_bridge_mcast multicast_ctx; + #ifdef CONFIG_BRIDGE_IGMP_SNOOPING + struct bridge_mcast_stats __percpu *mcast_stats; u32 hash_max; - u32 multicast_last_member_count; - u32 multicast_startup_query_count; - - u8 multicast_igmp_version; - u8 multicast_router; -#if IS_ENABLED(CONFIG_IPV6) - u8 multicast_mld_version; -#endif spinlock_t multicast_lock; - unsigned long multicast_last_member_interval; - unsigned long multicast_membership_interval; - unsigned long multicast_querier_interval; - unsigned long multicast_query_interval; - unsigned long multicast_query_response_interval; - unsigned long multicast_startup_query_interval; struct rhashtable mdb_hash_tbl; struct rhashtable sg_port_tbl; @@ -471,19 +494,6 @@ struct net_bridge { struct hlist_head mcast_gc_list; struct hlist_head mdb_list; - struct hlist_head ip4_mc_router_list; - struct timer_list ip4_mc_router_timer; - struct bridge_mcast_other_query ip4_other_query; - struct bridge_mcast_own_query ip4_own_query; - struct bridge_mcast_querier ip4_querier; - struct bridge_mcast_stats __percpu *mcast_stats; -#if IS_ENABLED(CONFIG_IPV6) - struct hlist_head ip6_mc_router_list; - struct timer_list ip6_mc_router_timer; - struct bridge_mcast_other_query ip6_other_query; - struct bridge_mcast_own_query ip6_own_query; - struct bridge_mcast_querier ip6_querier; -#endif /* IS_ENABLED(CONFIG_IPV6) */ struct work_struct mcast_gc_work; #endif @@ -893,16 +903,20 @@ static inline bool br_group_is_l2(const struct br_ip *group) rcu_dereference_protected(X, lockdep_is_held(&br->multicast_lock)) static inline struct hlist_node * -br_multicast_get_first_rport_node(struct net_bridge *b, struct sk_buff *skb) { +br_multicast_get_first_rport_node(struct net_bridge *br, struct sk_buff *skb) +{ + struct net_bridge_mcast *brmctx = &br->multicast_ctx; + #if IS_ENABLED(CONFIG_IPV6) if (skb->protocol == htons(ETH_P_IPV6)) - return rcu_dereference(hlist_first_rcu(&b->ip6_mc_router_list)); + return rcu_dereference(hlist_first_rcu(&brmctx->ip6_mc_router_list)); #endif - return rcu_dereference(hlist_first_rcu(&b->ip4_mc_router_list)); + return rcu_dereference(hlist_first_rcu(&brmctx->ip4_mc_router_list)); } static inline struct net_bridge_port * -br_multicast_rport_from_node_skb(struct hlist_node *rp, struct sk_buff *skb) { +br_multicast_rport_from_node_skb(struct hlist_node *rp, struct sk_buff *skb) +{ struct net_bridge_mcast_port *mctx; #if IS_ENABLED(CONFIG_IPV6) @@ -920,15 +934,15 @@ br_multicast_rport_from_node_skb(struct hlist_node *rp, struct sk_buff *skb) { return NULL; } -static inline bool br_ip4_multicast_is_router(struct net_bridge *br) +static inline bool 
br_ip4_multicast_is_router(struct net_bridge_mcast *brmctx) { - return timer_pending(&br->ip4_mc_router_timer); + return timer_pending(&brmctx->ip4_mc_router_timer); } -static inline bool br_ip6_multicast_is_router(struct net_bridge *br) +static inline bool br_ip6_multicast_is_router(struct net_bridge_mcast *brmctx) { #if IS_ENABLED(CONFIG_IPV6) - return timer_pending(&br->ip6_mc_router_timer); + return timer_pending(&brmctx->ip6_mc_router_timer); #else return false; #endif @@ -937,18 +951,20 @@ static inline bool br_ip6_multicast_is_router(struct net_bridge *br) static inline bool br_multicast_is_router(struct net_bridge *br, struct sk_buff *skb) { - switch (br->multicast_router) { + struct net_bridge_mcast *brmctx = &br->multicast_ctx; + + switch (brmctx->multicast_router) { case MDB_RTR_TYPE_PERM: return true; case MDB_RTR_TYPE_TEMP_QUERY: if (skb) { if (skb->protocol == htons(ETH_P_IP)) - return br_ip4_multicast_is_router(br); + return br_ip4_multicast_is_router(brmctx); else if (skb->protocol == htons(ETH_P_IPV6)) - return br_ip6_multicast_is_router(br); + return br_ip6_multicast_is_router(brmctx); } else { - return br_ip4_multicast_is_router(br) || - br_ip6_multicast_is_router(br); + return br_ip4_multicast_is_router(brmctx) || + br_ip6_multicast_is_router(brmctx); } fallthrough; default: @@ -983,11 +999,11 @@ static inline bool br_multicast_querier_exists(struct net_bridge *br, switch (eth->h_proto) { case (htons(ETH_P_IP)): return __br_multicast_querier_exists(br, - &br->ip4_other_query, false); + &br->multicast_ctx.ip4_other_query, false); #if IS_ENABLED(CONFIG_IPV6) case (htons(ETH_P_IPV6)): return __br_multicast_querier_exists(br, - &br->ip6_other_query, true); + &br->multicast_ctx.ip6_other_query, true); #endif default: return !!mdb && br_group_is_l2(&mdb->addr); @@ -1013,10 +1029,10 @@ static inline bool br_multicast_should_handle_mode(const struct net_bridge *br, { switch (proto) { case htons(ETH_P_IP): - return !!(br->multicast_igmp_version == 3); + return !!(br->multicast_ctx.multicast_igmp_version == 3); #if IS_ENABLED(CONFIG_IPV6) case htons(ETH_P_IPV6): - return !!(br->multicast_mld_version == 2); + return !!(br->multicast_ctx.multicast_mld_version == 2); #endif default: return false; @@ -1030,15 +1046,15 @@ static inline int br_multicast_igmp_type(const struct sk_buff *skb) static inline unsigned long br_multicast_lmqt(const struct net_bridge *br) { - return br->multicast_last_member_interval * - br->multicast_last_member_count; + return br->multicast_ctx.multicast_last_member_interval * + br->multicast_ctx.multicast_last_member_count; } static inline unsigned long br_multicast_gmi(const struct net_bridge *br) { /* use the RFC default of 2 for QRV */ - return 2 * br->multicast_query_interval + - br->multicast_query_response_interval; + return 2 * br->multicast_ctx.multicast_query_interval + + br->multicast_ctx.multicast_query_response_interval; } #else static inline int br_multicast_rcv(struct net_bridge *br, diff --git a/net/bridge/br_sysfs_br.c b/net/bridge/br_sysfs_br.c index 381467b691d5..953d544663d5 100644 --- a/net/bridge/br_sysfs_br.c +++ b/net/bridge/br_sysfs_br.c @@ -384,7 +384,7 @@ static ssize_t multicast_router_show(struct device *d, struct device_attribute *attr, char *buf) { struct net_bridge *br = to_bridge(d); - return sprintf(buf, "%d\n", br->multicast_router); + return sprintf(buf, "%d\n", br->multicast_ctx.multicast_router); } static int set_multicast_router(struct net_bridge *br, unsigned long val, @@ -514,7 +514,7 @@ static ssize_t 
multicast_igmp_version_show(struct device *d, { struct net_bridge *br = to_bridge(d); - return sprintf(buf, "%u\n", br->multicast_igmp_version); + return sprintf(buf, "%u\n", br->multicast_ctx.multicast_igmp_version); } static int set_multicast_igmp_version(struct net_bridge *br, unsigned long val, @@ -536,13 +536,13 @@ static ssize_t multicast_last_member_count_show(struct device *d, char *buf) { struct net_bridge *br = to_bridge(d); - return sprintf(buf, "%u\n", br->multicast_last_member_count); + return sprintf(buf, "%u\n", br->multicast_ctx.multicast_last_member_count); } static int set_last_member_count(struct net_bridge *br, unsigned long val, struct netlink_ext_ack *extack) { - br->multicast_last_member_count = val; + br->multicast_ctx.multicast_last_member_count = val; return 0; } @@ -558,13 +558,13 @@ static ssize_t multicast_startup_query_count_show( struct device *d, struct device_attribute *attr, char *buf) { struct net_bridge *br = to_bridge(d); - return sprintf(buf, "%u\n", br->multicast_startup_query_count); + return sprintf(buf, "%u\n", br->multicast_ctx.multicast_startup_query_count); } static int set_startup_query_count(struct net_bridge *br, unsigned long val, struct netlink_ext_ack *extack) { - br->multicast_startup_query_count = val; + br->multicast_ctx.multicast_startup_query_count = val; return 0; } @@ -581,13 +581,13 @@ static ssize_t multicast_last_member_interval_show( { struct net_bridge *br = to_bridge(d); return sprintf(buf, "%lu\n", - jiffies_to_clock_t(br->multicast_last_member_interval)); + jiffies_to_clock_t(br->multicast_ctx.multicast_last_member_interval)); } static int set_last_member_interval(struct net_bridge *br, unsigned long val, struct netlink_ext_ack *extack) { - br->multicast_last_member_interval = clock_t_to_jiffies(val); + br->multicast_ctx.multicast_last_member_interval = clock_t_to_jiffies(val); return 0; } @@ -604,13 +604,13 @@ static ssize_t multicast_membership_interval_show( { struct net_bridge *br = to_bridge(d); return sprintf(buf, "%lu\n", - jiffies_to_clock_t(br->multicast_membership_interval)); + jiffies_to_clock_t(br->multicast_ctx.multicast_membership_interval)); } static int set_membership_interval(struct net_bridge *br, unsigned long val, struct netlink_ext_ack *extack) { - br->multicast_membership_interval = clock_t_to_jiffies(val); + br->multicast_ctx.multicast_membership_interval = clock_t_to_jiffies(val); return 0; } @@ -628,13 +628,13 @@ static ssize_t multicast_querier_interval_show(struct device *d, { struct net_bridge *br = to_bridge(d); return sprintf(buf, "%lu\n", - jiffies_to_clock_t(br->multicast_querier_interval)); + jiffies_to_clock_t(br->multicast_ctx.multicast_querier_interval)); } static int set_querier_interval(struct net_bridge *br, unsigned long val, struct netlink_ext_ack *extack) { - br->multicast_querier_interval = clock_t_to_jiffies(val); + br->multicast_ctx.multicast_querier_interval = clock_t_to_jiffies(val); return 0; } @@ -652,13 +652,13 @@ static ssize_t multicast_query_interval_show(struct device *d, { struct net_bridge *br = to_bridge(d); return sprintf(buf, "%lu\n", - jiffies_to_clock_t(br->multicast_query_interval)); + jiffies_to_clock_t(br->multicast_ctx.multicast_query_interval)); } static int set_query_interval(struct net_bridge *br, unsigned long val, struct netlink_ext_ack *extack) { - br->multicast_query_interval = clock_t_to_jiffies(val); + br->multicast_ctx.multicast_query_interval = clock_t_to_jiffies(val); return 0; } @@ -676,13 +676,13 @@ static ssize_t 
multicast_query_response_interval_show( struct net_bridge *br = to_bridge(d); return sprintf( buf, "%lu\n", - jiffies_to_clock_t(br->multicast_query_response_interval)); + jiffies_to_clock_t(br->multicast_ctx.multicast_query_response_interval)); } static int set_query_response_interval(struct net_bridge *br, unsigned long val, struct netlink_ext_ack *extack) { - br->multicast_query_response_interval = clock_t_to_jiffies(val); + br->multicast_ctx.multicast_query_response_interval = clock_t_to_jiffies(val); return 0; } @@ -700,13 +700,13 @@ static ssize_t multicast_startup_query_interval_show( struct net_bridge *br = to_bridge(d); return sprintf( buf, "%lu\n", - jiffies_to_clock_t(br->multicast_startup_query_interval)); + jiffies_to_clock_t(br->multicast_ctx.multicast_startup_query_interval)); } static int set_startup_query_interval(struct net_bridge *br, unsigned long val, struct netlink_ext_ack *extack) { - br->multicast_startup_query_interval = clock_t_to_jiffies(val); + br->multicast_ctx.multicast_startup_query_interval = clock_t_to_jiffies(val); return 0; } @@ -751,7 +751,7 @@ static ssize_t multicast_mld_version_show(struct device *d, { struct net_bridge *br = to_bridge(d); - return sprintf(buf, "%u\n", br->multicast_mld_version); + return sprintf(buf, "%u\n", br->multicast_ctx.multicast_mld_version); } static int set_multicast_mld_version(struct net_bridge *br, unsigned long val, From patchwork Mon Jul 19 17:06:25 2021 X-Patchwork-Submitter: Nikolay Aleksandrov X-Patchwork-Id: 12386375 X-Patchwork-Delegate: kuba@kernel.org
From: Nikolay Aleksandrov To: netdev@vger.kernel.org Cc: roopa@nvidia.com, bridge@lists.linux-foundation.org, Nikolay Aleksandrov Subject: [PATCH net-next 03/15] net: bridge: multicast: use multicast contexts instead of bridge or port Date: Mon, 19 Jul 2021 20:06:25 +0300 Message-Id: <20210719170637.435541-4-razor@blackwall.org> In-Reply-To: <20210719170637.435541-1-razor@blackwall.org> References: <20210719170637.435541-1-razor@blackwall.org> From: Nikolay Aleksandrov Pass multicast context pointers to multicast functions instead of bridge/port. This would make it easier later to switch these contexts to their per-vlan versions. The patch is basically search and replace, no functional changes.
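To illustrate the conversion pattern described above, here is a minimal, self-contained C sketch. It is not taken from the patch; the struct layouts are simplified stand-ins for the real definitions in net/bridge/br_private.h, and only the calling-convention change is meant to mirror the series: helpers receive a pointer to the multicast context rather than to the bridge, so a per-vlan context can later be passed in without touching the callees.

#include <stdio.h>

/* Simplified stand-in for the bridge-global multicast context. */
struct net_bridge_mcast {
	unsigned long multicast_membership_interval;
};

/* Simplified stand-in for struct net_bridge after the context factor-out. */
struct net_bridge {
	struct net_bridge_mcast multicast_ctx;
};

/* Old style: the helper takes the bridge and reaches into its context. */
static unsigned long membership_interval_old(const struct net_bridge *br)
{
	return br->multicast_ctx.multicast_membership_interval;
}

/* New style: the helper takes the multicast context directly. */
static unsigned long membership_interval_new(const struct net_bridge_mcast *brmctx)
{
	return brmctx->multicast_membership_interval;
}

int main(void)
{
	struct net_bridge br = {
		.multicast_ctx = { .multicast_membership_interval = 260 },
	};

	/* Callers change from passing &br to passing &br.multicast_ctx. */
	printf("old: %lu\n", membership_interval_old(&br));
	printf("new: %lu\n", membership_interval_new(&br.multicast_ctx));
	return 0;
}

With that signature shape in place, a later per-vlan net_bridge_mcast can be handed to the same helpers in place of the bridge-global context, which is the motivation stated in the commit message.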
Signed-off-by: Nikolay Aleksandrov --- net/bridge/br_device.c | 9 +- net/bridge/br_forward.c | 7 +- net/bridge/br_input.c | 14 +- net/bridge/br_mdb.c | 2 +- net/bridge/br_multicast.c | 889 ++++++++++++++++-------------- net/bridge/br_multicast_eht.c | 92 ++-- net/bridge/br_private.h | 74 +-- net/bridge/br_private_mcast_eht.h | 3 +- 8 files changed, 575 insertions(+), 515 deletions(-) diff --git a/net/bridge/br_device.c b/net/bridge/br_device.c index e8b626cc6bfd..e815bf4f9f24 100644 --- a/net/bridge/br_device.c +++ b/net/bridge/br_device.c @@ -28,6 +28,7 @@ EXPORT_SYMBOL_GPL(nf_br_ops); netdev_tx_t br_dev_xmit(struct sk_buff *skb, struct net_device *dev) { struct net_bridge *br = netdev_priv(dev); + struct net_bridge_mcast *brmctx = &br->multicast_ctx; struct net_bridge_fdb_entry *dst; struct net_bridge_mdb_entry *mdst; const struct nf_br_ops *nf_ops; @@ -82,15 +83,15 @@ netdev_tx_t br_dev_xmit(struct sk_buff *skb, struct net_device *dev) br_flood(br, skb, BR_PKT_MULTICAST, false, true); goto out; } - if (br_multicast_rcv(br, NULL, skb, vid)) { + if (br_multicast_rcv(brmctx, NULL, skb, vid)) { kfree_skb(skb); goto out; } - mdst = br_mdb_get(br, skb, vid); + mdst = br_mdb_get(brmctx, skb, vid); if ((mdst || BR_INPUT_SKB_CB_MROUTERS_ONLY(skb)) && - br_multicast_querier_exists(br, eth_hdr(skb), mdst)) - br_multicast_flood(mdst, skb, false, true); + br_multicast_querier_exists(brmctx, eth_hdr(skb), mdst)) + br_multicast_flood(mdst, skb, brmctx, false, true); else br_flood(br, skb, BR_PKT_MULTICAST, false, true); } else if ((dst = br_fdb_find_rcu(br, dest, vid)) != NULL) { diff --git a/net/bridge/br_forward.c b/net/bridge/br_forward.c index 07856362538f..bfdbaf3015b9 100644 --- a/net/bridge/br_forward.c +++ b/net/bridge/br_forward.c @@ -267,20 +267,19 @@ static void maybe_deliver_addr(struct net_bridge_port *p, struct sk_buff *skb, /* called with rcu_read_lock */ void br_multicast_flood(struct net_bridge_mdb_entry *mdst, struct sk_buff *skb, + struct net_bridge_mcast *brmctx, bool local_rcv, bool local_orig) { - struct net_device *dev = BR_INPUT_SKB_CB(skb)->brdev; - struct net_bridge *br = netdev_priv(dev); struct net_bridge_port *prev = NULL; struct net_bridge_port_group *p; bool allow_mode_include = true; struct hlist_node *rp; - rp = br_multicast_get_first_rport_node(br, skb); + rp = br_multicast_get_first_rport_node(brmctx, skb); if (mdst) { p = rcu_dereference(mdst->ports); - if (br_multicast_should_handle_mode(br, mdst->addr.proto) && + if (br_multicast_should_handle_mode(brmctx, mdst->addr.proto) && br_multicast_is_star_g(&mdst->addr)) allow_mode_include = false; } else { diff --git a/net/bridge/br_input.c b/net/bridge/br_input.c index 1f506309efa8..bb2036dd4934 100644 --- a/net/bridge/br_input.c +++ b/net/bridge/br_input.c @@ -69,8 +69,10 @@ int br_handle_frame_finish(struct net *net, struct sock *sk, struct sk_buff *skb struct net_bridge_port *p = br_port_get_rcu(skb->dev); enum br_pkt_type pkt_type = BR_PKT_UNICAST; struct net_bridge_fdb_entry *dst = NULL; + struct net_bridge_mcast_port *pmctx; struct net_bridge_mdb_entry *mdst; bool local_rcv, mcast_hit = false; + struct net_bridge_mcast *brmctx; struct net_bridge *br; u16 vid = 0; u8 state; @@ -78,6 +80,8 @@ int br_handle_frame_finish(struct net *net, struct sock *sk, struct sk_buff *skb if (!p || p->state == BR_STATE_DISABLED) goto drop; + brmctx = &p->br->multicast_ctx; + pmctx = &p->multicast_ctx; state = p->state; if (!br_allowed_ingress(p->br, nbp_vlan_group_rcu(p), skb, &vid, &state)) @@ -98,7 +102,7 @@ int 
br_handle_frame_finish(struct net *net, struct sock *sk, struct sk_buff *skb local_rcv = true; } else { pkt_type = BR_PKT_MULTICAST; - if (br_multicast_rcv(br, p, skb, vid)) + if (br_multicast_rcv(brmctx, pmctx, skb, vid)) goto drop; } } @@ -128,11 +132,11 @@ int br_handle_frame_finish(struct net *net, struct sock *sk, struct sk_buff *skb switch (pkt_type) { case BR_PKT_MULTICAST: - mdst = br_mdb_get(br, skb, vid); + mdst = br_mdb_get(brmctx, skb, vid); if ((mdst || BR_INPUT_SKB_CB_MROUTERS_ONLY(skb)) && - br_multicast_querier_exists(br, eth_hdr(skb), mdst)) { + br_multicast_querier_exists(brmctx, eth_hdr(skb), mdst)) { if ((mdst && mdst->host_joined) || - br_multicast_is_router(br, skb)) { + br_multicast_is_router(brmctx, skb)) { local_rcv = true; br->dev->stats.multicast++; } @@ -162,7 +166,7 @@ int br_handle_frame_finish(struct net *net, struct sock *sk, struct sk_buff *skb if (!mcast_hit) br_flood(br, skb, pkt_type, local_rcv, false); else - br_multicast_flood(mdst, skb, local_rcv, false); + br_multicast_flood(mdst, skb, brmctx, local_rcv, false); } if (local_rcv) diff --git a/net/bridge/br_mdb.c b/net/bridge/br_mdb.c index effe03c08038..5319587198eb 100644 --- a/net/bridge/br_mdb.c +++ b/net/bridge/br_mdb.c @@ -1092,7 +1092,7 @@ static int br_mdb_add_group(struct net_bridge *br, struct net_bridge_port *port, * a new INCLUDE port (S,G) then all of *,G EXCLUDE ports need to be * added to it for proper replication */ - if (br_multicast_should_handle_mode(br, group.proto)) { + if (br_multicast_should_handle_mode(&br->multicast_ctx, group.proto)) { switch (filter_mode) { case MCAST_EXCLUDE: br_multicast_star_g_handle_mode(p, MCAST_EXCLUDE); diff --git a/net/bridge/br_multicast.c b/net/bridge/br_multicast.c index 92bfc1d95cd5..64145e48a0a5 100644 --- a/net/bridge/br_multicast.c +++ b/net/bridge/br_multicast.c @@ -49,30 +49,30 @@ static const struct rhashtable_params br_sg_port_rht_params = { .automatic_shrinking = true, }; -static void br_multicast_start_querier(struct net_bridge *br, +static void br_multicast_start_querier(struct net_bridge_mcast *brmctx, struct bridge_mcast_own_query *query); -static void br_ip4_multicast_add_router(struct net_bridge *br, - struct net_bridge_port *port); -static void br_ip4_multicast_leave_group(struct net_bridge *br, - struct net_bridge_port *port, +static void br_ip4_multicast_add_router(struct net_bridge_mcast *brmctx, + struct net_bridge_mcast_port *pmctx); +static void br_ip4_multicast_leave_group(struct net_bridge_mcast *brmctx, + struct net_bridge_mcast_port *pmctx, __be32 group, __u16 vid, const unsigned char *src); static void br_multicast_port_group_rexmit(struct timer_list *t); static void -br_multicast_rport_del_notify(struct net_bridge_port *p, bool deleted); -static void br_ip6_multicast_add_router(struct net_bridge *br, - struct net_bridge_port *port); +br_multicast_rport_del_notify(struct net_bridge_mcast_port *pmctx, bool deleted); +static void br_ip6_multicast_add_router(struct net_bridge_mcast *brmctx, + struct net_bridge_mcast_port *pmctx); #if IS_ENABLED(CONFIG_IPV6) -static void br_ip6_multicast_leave_group(struct net_bridge *br, - struct net_bridge_port *port, +static void br_ip6_multicast_leave_group(struct net_bridge_mcast *brmctx, + struct net_bridge_mcast_port *pmctx, const struct in6_addr *group, __u16 vid, const unsigned char *src); #endif static struct net_bridge_port_group * -__br_multicast_add_group(struct net_bridge *br, - struct net_bridge_port *port, +__br_multicast_add_group(struct net_bridge_mcast *brmctx, + struct 
net_bridge_mcast_port *pmctx, struct br_ip *group, const unsigned char *src, u8 filter_mode, @@ -140,9 +140,10 @@ static struct net_bridge_mdb_entry *br_mdb_ip6_get(struct net_bridge *br, } #endif -struct net_bridge_mdb_entry *br_mdb_get(struct net_bridge *br, +struct net_bridge_mdb_entry *br_mdb_get(struct net_bridge_mcast *brmctx, struct sk_buff *skb, u16 vid) { + struct net_bridge *br = brmctx->br; struct br_ip ip; if (!br_opt_get(br, BROPT_MULTICAST_ENABLED)) @@ -158,7 +159,7 @@ struct net_bridge_mdb_entry *br_mdb_get(struct net_bridge *br, switch (skb->protocol) { case htons(ETH_P_IP): ip.dst.ip4 = ip_hdr(skb)->daddr; - if (br->multicast_ctx.multicast_igmp_version == 3) { + if (brmctx->multicast_igmp_version == 3) { struct net_bridge_mdb_entry *mdb; ip.src.ip4 = ip_hdr(skb)->saddr; @@ -171,7 +172,7 @@ struct net_bridge_mdb_entry *br_mdb_get(struct net_bridge *br, #if IS_ENABLED(CONFIG_IPV6) case htons(ETH_P_IPV6): ip.dst.ip6 = ipv6_hdr(skb)->daddr; - if (br->multicast_ctx.multicast_mld_version == 2) { + if (brmctx->multicast_mld_version == 2) { struct net_bridge_mdb_entry *mdb; ip.src.ip6 = ipv6_hdr(skb)->saddr; @@ -203,20 +204,23 @@ static bool br_port_group_equal(struct net_bridge_port_group *p, return ether_addr_equal(src, p->eth_addr); } -static void __fwd_add_star_excl(struct net_bridge_port_group *pg, +static void __fwd_add_star_excl(struct net_bridge_mcast_port *pmctx, + struct net_bridge_port_group *pg, struct br_ip *sg_ip) { struct net_bridge_port_group_sg_key sg_key; - struct net_bridge *br = pg->key.port->br; struct net_bridge_port_group *src_pg; + struct net_bridge_mcast *brmctx; memset(&sg_key, 0, sizeof(sg_key)); + brmctx = &pg->key.port->br->multicast_ctx; sg_key.port = pg->key.port; sg_key.addr = *sg_ip; - if (br_sg_port_find(br, &sg_key)) + if (br_sg_port_find(brmctx->br, &sg_key)) return; - src_pg = __br_multicast_add_group(br, pg->key.port, sg_ip, pg->eth_addr, + src_pg = __br_multicast_add_group(brmctx, pmctx, + sg_ip, pg->eth_addr, MCAST_INCLUDE, false, false); if (IS_ERR_OR_NULL(src_pg) || src_pg->rt_protocol != RTPROT_KERNEL) @@ -256,6 +260,7 @@ void br_multicast_star_g_handle_mode(struct net_bridge_port_group *pg, { struct net_bridge *br = pg->key.port->br; struct net_bridge_port_group *pg_lst; + struct net_bridge_mcast_port *pmctx; struct net_bridge_mdb_entry *mp; struct br_ip sg_ip; @@ -265,6 +270,7 @@ void br_multicast_star_g_handle_mode(struct net_bridge_port_group *pg, mp = br_mdb_ip_get(br, &pg->key.addr); if (!mp) return; + pmctx = &pg->key.port->multicast_ctx; memset(&sg_ip, 0, sizeof(sg_ip)); sg_ip = pg->key.addr; @@ -284,7 +290,7 @@ void br_multicast_star_g_handle_mode(struct net_bridge_port_group *pg, __fwd_del_star_excl(pg, &sg_ip); break; case MCAST_EXCLUDE: - __fwd_add_star_excl(pg, &sg_ip); + __fwd_add_star_excl(pmctx, pg, &sg_ip); break; } } @@ -377,7 +383,9 @@ void br_multicast_sg_add_exclude_ports(struct net_bridge_mdb_entry *star_mp, { struct net_bridge_port_group_sg_key sg_key; struct net_bridge *br = star_mp->br; + struct net_bridge_mcast_port *pmctx; struct net_bridge_port_group *pg; + struct net_bridge_mcast *brmctx; if (WARN_ON(br_multicast_is_star_g(&sg->key.addr))) return; @@ -387,6 +395,7 @@ void br_multicast_sg_add_exclude_ports(struct net_bridge_mdb_entry *star_mp, br_multicast_sg_host_state(star_mp, sg); memset(&sg_key, 0, sizeof(sg_key)); sg_key.addr = sg->key.addr; + brmctx = &br->multicast_ctx; /* we need to add all exclude ports to the S,G */ for (pg = mlock_dereference(star_mp->ports, br); pg; @@ -400,7 +409,8 @@ void 
br_multicast_sg_add_exclude_ports(struct net_bridge_mdb_entry *star_mp, if (br_sg_port_find(br, &sg_key)) continue; - src_pg = __br_multicast_add_group(br, pg->key.port, + pmctx = &pg->key.port->multicast_ctx; + src_pg = __br_multicast_add_group(brmctx, pmctx, &sg->key.addr, sg->eth_addr, MCAST_INCLUDE, false, false); @@ -414,16 +424,21 @@ void br_multicast_sg_add_exclude_ports(struct net_bridge_mdb_entry *star_mp, static void br_multicast_fwd_src_add(struct net_bridge_group_src *src) { struct net_bridge_mdb_entry *star_mp; + struct net_bridge_mcast_port *pmctx; struct net_bridge_port_group *sg; + struct net_bridge_mcast *brmctx; struct br_ip sg_ip; if (src->flags & BR_SGRP_F_INSTALLED) return; memset(&sg_ip, 0, sizeof(sg_ip)); + pmctx = &src->pg->key.port->multicast_ctx; + brmctx = &src->br->multicast_ctx; sg_ip = src->pg->key.addr; sg_ip.src = src->addr.src; - sg = __br_multicast_add_group(src->br, src->pg->key.port, &sg_ip, + + sg = __br_multicast_add_group(brmctx, pmctx, &sg_ip, src->pg->eth_addr, MCAST_INCLUDE, false, !timer_pending(&src->timer)); if (IS_ERR_OR_NULL(sg)) @@ -692,14 +707,13 @@ static void br_multicast_gc(struct hlist_head *head) } } -static struct sk_buff *br_ip4_multicast_alloc_query(struct net_bridge *br, +static struct sk_buff *br_ip4_multicast_alloc_query(struct net_bridge_mcast *brmctx, struct net_bridge_port_group *pg, __be32 ip_dst, __be32 group, bool with_srcs, bool over_lmqt, u8 sflag, u8 *igmp_type, bool *need_rexmit) { - struct net_bridge_mcast *brmctx = &br->multicast_ctx; struct net_bridge_port *p = pg ? pg->key.port : NULL; struct net_bridge_group_src *ent; size_t pkt_size, igmp_hdr_size; @@ -735,10 +749,10 @@ static struct sk_buff *br_ip4_multicast_alloc_query(struct net_bridge *br, pkt_size = sizeof(*eth) + sizeof(*iph) + 4 + igmp_hdr_size; if ((p && pkt_size > p->dev->mtu) || - pkt_size > br->dev->mtu) + pkt_size > brmctx->br->dev->mtu) return NULL; - skb = netdev_alloc_skb_ip_align(br->dev, pkt_size); + skb = netdev_alloc_skb_ip_align(brmctx->br->dev, pkt_size); if (!skb) goto out; @@ -747,7 +761,7 @@ static struct sk_buff *br_ip4_multicast_alloc_query(struct net_bridge *br, skb_reset_mac_header(skb); eth = eth_hdr(skb); - ether_addr_copy(eth->h_source, br->dev->dev_addr); + ether_addr_copy(eth->h_source, brmctx->br->dev->dev_addr); ip_eth_mc_map(ip_dst, eth->h_dest); eth->h_proto = htons(ETH_P_IP); skb_put(skb, sizeof(*eth)); @@ -763,8 +777,8 @@ static struct sk_buff *br_ip4_multicast_alloc_query(struct net_bridge *br, iph->frag_off = htons(IP_DF); iph->ttl = 1; iph->protocol = IPPROTO_IGMP; - iph->saddr = br_opt_get(br, BROPT_MULTICAST_QUERY_USE_IFADDR) ? - inet_select_addr(br->dev, 0, RT_SCOPE_LINK) : 0; + iph->saddr = br_opt_get(brmctx->br, BROPT_MULTICAST_QUERY_USE_IFADDR) ? + inet_select_addr(brmctx->br->dev, 0, RT_SCOPE_LINK) : 0; iph->daddr = ip_dst; ((u8 *)&iph[1])[0] = IPOPT_RA; ((u8 *)&iph[1])[1] = 4; @@ -838,7 +852,7 @@ static struct sk_buff *br_ip4_multicast_alloc_query(struct net_bridge *br, } #if IS_ENABLED(CONFIG_IPV6) -static struct sk_buff *br_ip6_multicast_alloc_query(struct net_bridge *br, +static struct sk_buff *br_ip6_multicast_alloc_query(struct net_bridge_mcast *brmctx, struct net_bridge_port_group *pg, const struct in6_addr *ip6_dst, const struct in6_addr *group, @@ -846,7 +860,6 @@ static struct sk_buff *br_ip6_multicast_alloc_query(struct net_bridge *br, u8 sflag, u8 *igmp_type, bool *need_rexmit) { - struct net_bridge_mcast *brmctx = &br->multicast_ctx; struct net_bridge_port *p = pg ? 
pg->key.port : NULL; struct net_bridge_group_src *ent; size_t pkt_size, mld_hdr_size; @@ -884,10 +897,10 @@ static struct sk_buff *br_ip6_multicast_alloc_query(struct net_bridge *br, pkt_size = sizeof(*eth) + sizeof(*ip6h) + 8 + mld_hdr_size; if ((p && pkt_size > p->dev->mtu) || - pkt_size > br->dev->mtu) + pkt_size > brmctx->br->dev->mtu) return NULL; - skb = netdev_alloc_skb_ip_align(br->dev, pkt_size); + skb = netdev_alloc_skb_ip_align(brmctx->br->dev, pkt_size); if (!skb) goto out; @@ -897,7 +910,7 @@ static struct sk_buff *br_ip6_multicast_alloc_query(struct net_bridge *br, skb_reset_mac_header(skb); eth = eth_hdr(skb); - ether_addr_copy(eth->h_source, br->dev->dev_addr); + ether_addr_copy(eth->h_source, brmctx->br->dev->dev_addr); eth->h_proto = htons(ETH_P_IPV6); skb_put(skb, sizeof(*eth)); @@ -910,14 +923,14 @@ static struct sk_buff *br_ip6_multicast_alloc_query(struct net_bridge *br, ip6h->nexthdr = IPPROTO_HOPOPTS; ip6h->hop_limit = 1; ip6h->daddr = *ip6_dst; - if (ipv6_dev_get_saddr(dev_net(br->dev), br->dev, &ip6h->daddr, 0, - &ip6h->saddr)) { + if (ipv6_dev_get_saddr(dev_net(brmctx->br->dev), brmctx->br->dev, + &ip6h->daddr, 0, &ip6h->saddr)) { kfree_skb(skb); - br_opt_toggle(br, BROPT_HAS_IPV6_ADDR, false); + br_opt_toggle(brmctx->br, BROPT_HAS_IPV6_ADDR, false); return NULL; } - br_opt_toggle(br, BROPT_HAS_IPV6_ADDR, true); + br_opt_toggle(brmctx->br, BROPT_HAS_IPV6_ADDR, true); ipv6_eth_mc_map(&ip6h->daddr, eth->h_dest); hopopt = (u8 *)(ip6h + 1); @@ -1002,7 +1015,7 @@ static struct sk_buff *br_ip6_multicast_alloc_query(struct net_bridge *br, } #endif -static struct sk_buff *br_multicast_alloc_query(struct net_bridge *br, +static struct sk_buff *br_multicast_alloc_query(struct net_bridge_mcast *brmctx, struct net_bridge_port_group *pg, struct br_ip *ip_dst, struct br_ip *group, @@ -1015,7 +1028,7 @@ static struct sk_buff *br_multicast_alloc_query(struct net_bridge *br, switch (group->proto) { case htons(ETH_P_IP): ip4_dst = ip_dst ? 
ip_dst->dst.ip4 : htonl(INADDR_ALLHOSTS_GROUP); - return br_ip4_multicast_alloc_query(br, pg, + return br_ip4_multicast_alloc_query(brmctx, pg, ip4_dst, group->dst.ip4, with_srcs, over_lmqt, sflag, igmp_type, @@ -1030,7 +1043,7 @@ static struct sk_buff *br_multicast_alloc_query(struct net_bridge *br, ipv6_addr_set(&ip6_dst, htonl(0xff020000), 0, 0, htonl(1)); - return br_ip6_multicast_alloc_query(br, pg, + return br_ip6_multicast_alloc_query(brmctx, pg, &ip6_dst, &group->dst.ip6, with_srcs, over_lmqt, sflag, igmp_type, @@ -1238,8 +1251,8 @@ void br_multicast_host_leave(struct net_bridge_mdb_entry *mp, bool notify) } static struct net_bridge_port_group * -__br_multicast_add_group(struct net_bridge *br, - struct net_bridge_port *port, +__br_multicast_add_group(struct net_bridge_mcast *brmctx, + struct net_bridge_mcast_port *pmctx, struct br_ip *group, const unsigned char *src, u8 filter_mode, @@ -1251,29 +1264,29 @@ __br_multicast_add_group(struct net_bridge *br, struct net_bridge_mdb_entry *mp; unsigned long now = jiffies; - if (!netif_running(br->dev) || - (port && port->state == BR_STATE_DISABLED)) + if (!netif_running(brmctx->br->dev) || + (pmctx && pmctx->port->state == BR_STATE_DISABLED)) goto out; - mp = br_multicast_new_group(br, group); + mp = br_multicast_new_group(brmctx->br, group); if (IS_ERR(mp)) return ERR_CAST(mp); - if (!port) { + if (!pmctx) { br_multicast_host_join(mp, true); goto out; } for (pp = &mp->ports; - (p = mlock_dereference(*pp, br)) != NULL; + (p = mlock_dereference(*pp, brmctx->br)) != NULL; pp = &p->next) { - if (br_port_group_equal(p, port, src)) + if (br_port_group_equal(p, pmctx->port, src)) goto found; - if ((unsigned long)p->key.port < (unsigned long)port) + if ((unsigned long)p->key.port < (unsigned long)pmctx->port) break; } - p = br_multicast_new_port_group(port, group, *pp, 0, src, + p = br_multicast_new_port_group(pmctx->port, group, *pp, 0, src, filter_mode, RTPROT_KERNEL); if (unlikely(!p)) { p = ERR_PTR(-ENOMEM); @@ -1282,19 +1295,19 @@ __br_multicast_add_group(struct net_bridge *br, rcu_assign_pointer(*pp, p); if (blocked) p->flags |= MDB_PG_FLAGS_BLOCKED; - br_mdb_notify(br->dev, mp, p, RTM_NEWMDB); + br_mdb_notify(brmctx->br->dev, mp, p, RTM_NEWMDB); found: if (igmpv2_mldv1) mod_timer(&p->timer, - now + br->multicast_ctx.multicast_membership_interval); + now + brmctx->multicast_membership_interval); out: return p; } -static int br_multicast_add_group(struct net_bridge *br, - struct net_bridge_port *port, +static int br_multicast_add_group(struct net_bridge_mcast *brmctx, + struct net_bridge_mcast_port *pmctx, struct br_ip *group, const unsigned char *src, u8 filter_mode, @@ -1303,18 +1316,18 @@ static int br_multicast_add_group(struct net_bridge *br, struct net_bridge_port_group *pg; int err; - spin_lock(&br->multicast_lock); - pg = __br_multicast_add_group(br, port, group, src, filter_mode, + spin_lock(&brmctx->br->multicast_lock); + pg = __br_multicast_add_group(brmctx, pmctx, group, src, filter_mode, igmpv2_mldv1, false); /* NULL is considered valid for host joined groups */ err = PTR_ERR_OR_ZERO(pg); - spin_unlock(&br->multicast_lock); + spin_unlock(&brmctx->br->multicast_lock); return err; } -static int br_ip4_multicast_add_group(struct net_bridge *br, - struct net_bridge_port *port, +static int br_ip4_multicast_add_group(struct net_bridge_mcast *brmctx, + struct net_bridge_mcast_port *pmctx, __be32 group, __u16 vid, const unsigned char *src, @@ -1332,13 +1345,13 @@ static int br_ip4_multicast_add_group(struct net_bridge *br, br_group.vid 
= vid; filter_mode = igmpv2 ? MCAST_EXCLUDE : MCAST_INCLUDE; - return br_multicast_add_group(br, port, &br_group, src, filter_mode, - igmpv2); + return br_multicast_add_group(brmctx, pmctx, &br_group, src, + filter_mode, igmpv2); } #if IS_ENABLED(CONFIG_IPV6) -static int br_ip6_multicast_add_group(struct net_bridge *br, - struct net_bridge_port *port, +static int br_ip6_multicast_add_group(struct net_bridge_mcast *brmctx, + struct net_bridge_mcast_port *pmctx, const struct in6_addr *group, __u16 vid, const unsigned char *src, @@ -1356,8 +1369,8 @@ static int br_ip6_multicast_add_group(struct net_bridge *br, br_group.vid = vid; filter_mode = mldv1 ? MCAST_EXCLUDE : MCAST_INCLUDE; - return br_multicast_add_group(br, port, &br_group, src, filter_mode, - mldv1); + return br_multicast_add_group(brmctx, pmctx, &br_group, src, + filter_mode, mldv1); } #endif @@ -1370,15 +1383,15 @@ static bool br_multicast_rport_del(struct hlist_node *rlist) return true; } -static bool br_ip4_multicast_rport_del(struct net_bridge_port *p) +static bool br_ip4_multicast_rport_del(struct net_bridge_mcast_port *pmctx) { - return br_multicast_rport_del(&p->multicast_ctx.ip4_rlist); + return br_multicast_rport_del(&pmctx->ip4_rlist); } -static bool br_ip6_multicast_rport_del(struct net_bridge_port *p) +static bool br_ip6_multicast_rport_del(struct net_bridge_mcast_port *pmctx) { #if IS_ENABLED(CONFIG_IPV6) - return br_multicast_rport_del(&p->multicast_ctx.ip6_rlist); + return br_multicast_rport_del(&pmctx->ip6_rlist); #else return false; #endif @@ -1398,7 +1411,7 @@ static void br_multicast_router_expired(struct net_bridge_mcast_port *pmctx, goto out; del = br_multicast_rport_del(rlist); - br_multicast_rport_del_notify(pmctx->port, del); + br_multicast_rport_del_notify(pmctx, del); out: spin_unlock(&br->multicast_lock); } @@ -1475,7 +1488,7 @@ static void br_multicast_querier_expired(struct net_bridge_mcast *brmctx, !br_opt_get(brmctx->br, BROPT_MULTICAST_ENABLED)) goto out; - br_multicast_start_querier(brmctx->br, query); + br_multicast_start_querier(brmctx, query); out: spin_unlock(&brmctx->br->multicast_lock); @@ -1499,12 +1512,10 @@ static void br_ip6_multicast_querier_expired(struct timer_list *t) } #endif -static void br_multicast_select_own_querier(struct net_bridge *br, +static void br_multicast_select_own_querier(struct net_bridge_mcast *brmctx, struct br_ip *ip, struct sk_buff *skb) { - struct net_bridge_mcast *brmctx = &br->multicast_ctx; - if (ip->proto == htons(ETH_P_IP)) brmctx->ip4_querier.addr.src.ip4 = ip_hdr(skb)->saddr; #if IS_ENABLED(CONFIG_IPV6) @@ -1513,8 +1524,8 @@ static void br_multicast_select_own_querier(struct net_bridge *br, #endif } -static void __br_multicast_send_query(struct net_bridge *br, - struct net_bridge_port *port, +static void __br_multicast_send_query(struct net_bridge_mcast *brmctx, + struct net_bridge_mcast_port *pmctx, struct net_bridge_port_group *pg, struct br_ip *ip_dst, struct br_ip *group, @@ -1527,18 +1538,18 @@ static void __br_multicast_send_query(struct net_bridge *br, u8 igmp_type; again_under_lmqt: - skb = br_multicast_alloc_query(br, pg, ip_dst, group, with_srcs, + skb = br_multicast_alloc_query(brmctx, pg, ip_dst, group, with_srcs, over_lmqt, sflag, &igmp_type, need_rexmit); if (!skb) return; - if (port) { - skb->dev = port->dev; - br_multicast_count(br, port, skb, igmp_type, + if (pmctx) { + skb->dev = pmctx->port->dev; + br_multicast_count(brmctx->br, pmctx->port, skb, igmp_type, BR_MCAST_DIR_TX); NF_HOOK(NFPROTO_BRIDGE, NF_BR_LOCAL_OUT, - dev_net(port->dev), 
NULL, skb, NULL, skb->dev, + dev_net(pmctx->port->dev), NULL, skb, NULL, skb->dev, br_dev_queue_push_xmit); if (over_lmqt && with_srcs && sflag) { @@ -1546,31 +1557,30 @@ static void __br_multicast_send_query(struct net_bridge *br, goto again_under_lmqt; } } else { - br_multicast_select_own_querier(br, group, skb); - br_multicast_count(br, port, skb, igmp_type, + br_multicast_select_own_querier(brmctx, group, skb); + br_multicast_count(brmctx->br, NULL, skb, igmp_type, BR_MCAST_DIR_RX); netif_rx(skb); } } -static void br_multicast_send_query(struct net_bridge *br, - struct net_bridge_port *port, +static void br_multicast_send_query(struct net_bridge_mcast *brmctx, + struct net_bridge_mcast_port *pmctx, struct bridge_mcast_own_query *own_query) { - struct net_bridge_mcast *brmctx = &br->multicast_ctx; struct bridge_mcast_other_query *other_query = NULL; struct br_ip br_group; unsigned long time; - if (!netif_running(br->dev) || - !br_opt_get(br, BROPT_MULTICAST_ENABLED) || - !br_opt_get(br, BROPT_MULTICAST_QUERIER)) + if (!netif_running(brmctx->br->dev) || + !br_opt_get(brmctx->br, BROPT_MULTICAST_ENABLED) || + !br_opt_get(brmctx->br, BROPT_MULTICAST_QUERIER)) return; memset(&br_group.dst, 0, sizeof(br_group.dst)); - if (port ? (own_query == &port->multicast_ctx.ip4_own_query) : - (own_query == &brmctx->ip4_own_query)) { + if (pmctx ? (own_query == &pmctx->ip4_own_query) : + (own_query == &brmctx->ip4_own_query)) { other_query = &brmctx->ip4_other_query; br_group.proto = htons(ETH_P_IP); #if IS_ENABLED(CONFIG_IPV6) @@ -1583,8 +1593,8 @@ static void br_multicast_send_query(struct net_bridge *br, if (!other_query || timer_pending(&other_query->timer)) return; - __br_multicast_send_query(br, port, NULL, NULL, &br_group, false, 0, - NULL); + __br_multicast_send_query(brmctx, pmctx, NULL, NULL, &br_group, false, + 0, NULL); time = jiffies; time += own_query->startup_sent < brmctx->multicast_startup_query_count ? 
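
For readers following the conversion: below is a minimal, compilable sketch of the two context structures this patch threads through the code. It is an illustration only -- the field list is reduced to the members used in the hunks above, the types are simplified, the real definitions live in net/bridge/br_private.h (see the diffstat), and query_family() is a made-up name that merely wraps the own_query pointer comparison br_multicast_send_query() performs above.

/* Illustration only -- not the br_private.h definitions. */
#include <stdio.h>

struct net_bridge;		/* bridge device, unchanged by this patch */
struct net_bridge_port;		/* bridge port, unchanged by this patch */

struct bridge_mcast_own_query {
	unsigned long startup_sent;
};

/* bridge-wide multicast state, embedded as br->multicast_ctx */
struct net_bridge_mcast {
	struct net_bridge *br;		/* back-pointer for options and the lock */
	struct bridge_mcast_own_query ip4_own_query;
	struct bridge_mcast_own_query ip6_own_query;
	unsigned long multicast_startup_query_count;
};

/* per-port multicast state, embedded as port->multicast_ctx */
struct net_bridge_mcast_port {
	struct net_bridge_port *port;	/* back-pointer for skb->dev, notifications */
	struct bridge_mcast_own_query ip4_own_query;
	struct bridge_mcast_own_query ip6_own_query;
	unsigned char multicast_router;
};

/*
 * New call pattern: helpers take (brmctx, pmctx) instead of (br, port),
 * and a NULL pmctx plays the role the NULL port used to play ("the query
 * belongs to the bridge itself").
 */
static const char *
query_family(const struct net_bridge_mcast *brmctx,
	     const struct net_bridge_mcast_port *pmctx,
	     const struct bridge_mcast_own_query *own_query)
{
	if (pmctx ? (own_query == &pmctx->ip4_own_query) :
		    (own_query == &brmctx->ip4_own_query))
		return "IPv4";
	return "IPv6";
}

int main(void)
{
	struct net_bridge_mcast brmctx = { .multicast_startup_query_count = 2 };
	struct net_bridge_mcast_port pmctx = { .multicast_router = 0 };

	printf("%s\n", query_family(&brmctx, &pmctx, &pmctx.ip4_own_query)); /* IPv4 */
	printf("%s\n", query_family(&brmctx, NULL, &brmctx.ip6_own_query));  /* IPv6 */
	return 0;
}

Keeping the back-pointers (brmctx->br, pmctx->port) is what lets the converted helpers keep using br->multicast_lock, netif_running() and skb->dev = port->dev without growing their argument lists any further.
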
@@ -1607,7 +1617,7 @@ br_multicast_port_query_expired(struct net_bridge_mcast_port *pmctx, if (query->startup_sent < br->multicast_ctx.multicast_startup_query_count) query->startup_sent++; - br_multicast_send_query(pmctx->port->br, pmctx->port, query); + br_multicast_send_query(&br->multicast_ctx, pmctx, query); out: spin_unlock(&br->multicast_lock); @@ -1636,7 +1646,8 @@ static void br_multicast_port_group_rexmit(struct timer_list *t) struct net_bridge_port_group *pg = from_timer(pg, t, rexmit_timer); struct bridge_mcast_other_query *other_query = NULL; struct net_bridge *br = pg->key.port->br; - struct net_bridge_mcast *brmctx = &br->multicast_ctx; + struct net_bridge_mcast_port *pmctx; + struct net_bridge_mcast *brmctx; bool need_rexmit = false; spin_lock(&br->multicast_lock); @@ -1645,6 +1656,8 @@ static void br_multicast_port_group_rexmit(struct timer_list *t) !br_opt_get(br, BROPT_MULTICAST_QUERIER)) goto out; + brmctx = &br->multicast_ctx; + pmctx = &pg->key.port->multicast_ctx; if (pg->key.addr.proto == htons(ETH_P_IP)) other_query = &brmctx->ip4_other_query; #if IS_ENABLED(CONFIG_IPV6) @@ -1657,10 +1670,10 @@ static void br_multicast_port_group_rexmit(struct timer_list *t) if (pg->grp_query_rexmit_cnt) { pg->grp_query_rexmit_cnt--; - __br_multicast_send_query(br, pg->key.port, pg, &pg->key.addr, + __br_multicast_send_query(brmctx, pmctx, pg, &pg->key.addr, &pg->key.addr, false, 1, NULL); } - __br_multicast_send_query(br, pg->key.port, pg, &pg->key.addr, + __br_multicast_send_query(brmctx, pmctx, pg, &pg->key.addr, &pg->key.addr, true, 0, &need_rexmit); if (pg->grp_query_rexmit_cnt || need_rexmit) @@ -1756,20 +1769,21 @@ static void br_multicast_enable(struct bridge_mcast_own_query *query) mod_timer(&query->timer, jiffies); } -static void __br_multicast_enable_port(struct net_bridge_port *port) +static void __br_multicast_enable_port_ctx(struct net_bridge_mcast_port *pmctx) { - struct net_bridge *br = port->br; + struct net_bridge *br = pmctx->port->br; + struct net_bridge_mcast *brmctx = &pmctx->port->br->multicast_ctx; if (!br_opt_get(br, BROPT_MULTICAST_ENABLED) || !netif_running(br->dev)) return; - br_multicast_enable(&port->multicast_ctx.ip4_own_query); + br_multicast_enable(&pmctx->ip4_own_query); #if IS_ENABLED(CONFIG_IPV6) - br_multicast_enable(&port->multicast_ctx.ip6_own_query); + br_multicast_enable(&pmctx->ip6_own_query); #endif - if (port->multicast_ctx.multicast_router == MDB_RTR_TYPE_PERM) { - br_ip4_multicast_add_router(br, port); - br_ip6_multicast_add_router(br, port); + if (pmctx->multicast_router == MDB_RTR_TYPE_PERM) { + br_ip4_multicast_add_router(brmctx, pmctx); + br_ip6_multicast_add_router(brmctx, pmctx); } } @@ -1778,12 +1792,13 @@ void br_multicast_enable_port(struct net_bridge_port *port) struct net_bridge *br = port->br; spin_lock(&br->multicast_lock); - __br_multicast_enable_port(port); + __br_multicast_enable_port_ctx(&port->multicast_ctx); spin_unlock(&br->multicast_lock); } void br_multicast_disable_port(struct net_bridge_port *port) { + struct net_bridge_mcast_port *pmctx = &port->multicast_ctx; struct net_bridge *br = port->br; struct net_bridge_port_group *pg; struct hlist_node *n; @@ -1794,15 +1809,15 @@ void br_multicast_disable_port(struct net_bridge_port *port) if (!(pg->flags & MDB_PG_FLAGS_PERMANENT)) br_multicast_find_del_pg(br, pg); - del |= br_ip4_multicast_rport_del(port); - del_timer(&port->multicast_ctx.ip4_mc_router_timer); - del_timer(&port->multicast_ctx.ip4_own_query.timer); - del |= br_ip6_multicast_rport_del(port); + del |= 
br_ip4_multicast_rport_del(pmctx); + del_timer(&pmctx->ip4_mc_router_timer); + del_timer(&pmctx->ip4_own_query.timer); + del |= br_ip6_multicast_rport_del(pmctx); #if IS_ENABLED(CONFIG_IPV6) - del_timer(&port->multicast_ctx.ip6_mc_router_timer); - del_timer(&port->multicast_ctx.ip6_own_query.timer); + del_timer(&pmctx->ip6_mc_router_timer); + del_timer(&pmctx->ip6_own_query.timer); #endif - br_multicast_rport_del_notify(port, del); + br_multicast_rport_del_notify(pmctx, del); spin_unlock(&br->multicast_lock); } @@ -1828,17 +1843,17 @@ static void __grp_src_mod_timer(struct net_bridge_group_src *src, br_multicast_fwd_src_handle(src); } -static void __grp_src_query_marked_and_rexmit(struct net_bridge_port_group *pg) +static void __grp_src_query_marked_and_rexmit(struct net_bridge_mcast *brmctx, + struct net_bridge_mcast_port *pmctx, + struct net_bridge_port_group *pg) { struct bridge_mcast_other_query *other_query = NULL; - struct net_bridge *br = pg->key.port->br; - struct net_bridge_mcast *brmctx = &br->multicast_ctx; u32 lmqc = brmctx->multicast_last_member_count; unsigned long lmqt, lmi, now = jiffies; struct net_bridge_group_src *ent; - if (!netif_running(br->dev) || - !br_opt_get(br, BROPT_MULTICAST_ENABLED)) + if (!netif_running(brmctx->br->dev) || + !br_opt_get(brmctx->br, BROPT_MULTICAST_ENABLED)) return; if (pg->key.addr.proto == htons(ETH_P_IP)) @@ -1848,12 +1863,13 @@ static void __grp_src_query_marked_and_rexmit(struct net_bridge_port_group *pg) other_query = &brmctx->ip6_other_query; #endif - lmqt = now + br_multicast_lmqt(br); + lmqt = now + br_multicast_lmqt(brmctx); hlist_for_each_entry(ent, &pg->src_list, node) { if (ent->flags & BR_SGRP_F_SEND) { ent->flags &= ~BR_SGRP_F_SEND; if (ent->timer.expires > lmqt) { - if (br_opt_get(br, BROPT_MULTICAST_QUERIER) && + if (br_opt_get(brmctx->br, + BROPT_MULTICAST_QUERIER) && other_query && !timer_pending(&other_query->timer)) ent->src_query_rexmit_cnt = lmqc; @@ -1862,11 +1878,11 @@ static void __grp_src_query_marked_and_rexmit(struct net_bridge_port_group *pg) } } - if (!br_opt_get(br, BROPT_MULTICAST_QUERIER) || + if (!br_opt_get(brmctx->br, BROPT_MULTICAST_QUERIER) || !other_query || timer_pending(&other_query->timer)) return; - __br_multicast_send_query(br, pg->key.port, pg, &pg->key.addr, + __br_multicast_send_query(brmctx, pmctx, pg, &pg->key.addr, &pg->key.addr, true, 1, NULL); lmi = now + brmctx->multicast_last_member_interval; @@ -1875,15 +1891,15 @@ static void __grp_src_query_marked_and_rexmit(struct net_bridge_port_group *pg) mod_timer(&pg->rexmit_timer, lmi); } -static void __grp_send_query_and_rexmit(struct net_bridge_port_group *pg) +static void __grp_send_query_and_rexmit(struct net_bridge_mcast *brmctx, + struct net_bridge_mcast_port *pmctx, + struct net_bridge_port_group *pg) { struct bridge_mcast_other_query *other_query = NULL; - struct net_bridge *br = pg->key.port->br; - struct net_bridge_mcast *brmctx = &br->multicast_ctx; unsigned long now = jiffies, lmi; - if (!netif_running(br->dev) || - !br_opt_get(br, BROPT_MULTICAST_ENABLED)) + if (!netif_running(brmctx->br->dev) || + !br_opt_get(brmctx->br, BROPT_MULTICAST_ENABLED)) return; if (pg->key.addr.proto == htons(ETH_P_IP)) @@ -1893,11 +1909,11 @@ static void __grp_send_query_and_rexmit(struct net_bridge_port_group *pg) other_query = &brmctx->ip6_other_query; #endif - if (br_opt_get(br, BROPT_MULTICAST_QUERIER) && + if (br_opt_get(brmctx->br, BROPT_MULTICAST_QUERIER) && other_query && !timer_pending(&other_query->timer)) { lmi = now + 
brmctx->multicast_last_member_interval; pg->grp_query_rexmit_cnt = brmctx->multicast_last_member_count - 1; - __br_multicast_send_query(br, pg->key.port, pg, &pg->key.addr, + __br_multicast_send_query(brmctx, pmctx, pg, &pg->key.addr, &pg->key.addr, false, 0, NULL); if (!timer_pending(&pg->rexmit_timer) || time_after(pg->rexmit_timer.expires, lmi)) @@ -1906,8 +1922,8 @@ static void __grp_send_query_and_rexmit(struct net_bridge_port_group *pg) if (pg->filter_mode == MCAST_EXCLUDE && (!timer_pending(&pg->timer) || - time_after(pg->timer.expires, now + br_multicast_lmqt(br)))) - mod_timer(&pg->timer, now + br_multicast_lmqt(br)); + time_after(pg->timer.expires, now + br_multicast_lmqt(brmctx)))) + mod_timer(&pg->timer, now + br_multicast_lmqt(brmctx)); } /* State Msg type New state Actions @@ -1915,11 +1931,11 @@ static void __grp_send_query_and_rexmit(struct net_bridge_port_group *pg) * INCLUDE (A) ALLOW (B) INCLUDE (A+B) (B)=GMI * EXCLUDE (X,Y) ALLOW (A) EXCLUDE (X+A,Y-A) (A)=GMI */ -static bool br_multicast_isinc_allow(struct net_bridge_port_group *pg, void *h_addr, +static bool br_multicast_isinc_allow(const struct net_bridge_mcast *brmctx, + struct net_bridge_port_group *pg, void *h_addr, void *srcs, u32 nsrcs, size_t addr_size, int grec_type) { - struct net_bridge *br = pg->key.port->br; struct net_bridge_group_src *ent; unsigned long now = jiffies; bool changed = false; @@ -1938,10 +1954,11 @@ static bool br_multicast_isinc_allow(struct net_bridge_port_group *pg, void *h_a } if (ent) - __grp_src_mod_timer(ent, now + br_multicast_gmi(br)); + __grp_src_mod_timer(ent, now + br_multicast_gmi(brmctx)); } - if (br_multicast_eht_handle(pg, h_addr, srcs, nsrcs, addr_size, grec_type)) + if (br_multicast_eht_handle(brmctx, pg, h_addr, srcs, nsrcs, addr_size, + grec_type)) changed = true; return changed; @@ -1952,7 +1969,8 @@ static bool br_multicast_isinc_allow(struct net_bridge_port_group *pg, void *h_a * Delete (A-B) * Group Timer=GMI */ -static void __grp_src_isexc_incl(struct net_bridge_port_group *pg, void *h_addr, +static void __grp_src_isexc_incl(const struct net_bridge_mcast *brmctx, + struct net_bridge_port_group *pg, void *h_addr, void *srcs, u32 nsrcs, size_t addr_size, int grec_type) { @@ -1976,7 +1994,8 @@ static void __grp_src_isexc_incl(struct net_bridge_port_group *pg, void *h_addr, br_multicast_fwd_src_handle(ent); } - br_multicast_eht_handle(pg, h_addr, srcs, nsrcs, addr_size, grec_type); + br_multicast_eht_handle(brmctx, pg, h_addr, srcs, nsrcs, addr_size, + grec_type); __grp_src_delete_marked(pg); } @@ -1987,11 +2006,11 @@ static void __grp_src_isexc_incl(struct net_bridge_port_group *pg, void *h_addr, * Delete (Y-A) * Group Timer=GMI */ -static bool __grp_src_isexc_excl(struct net_bridge_port_group *pg, void *h_addr, +static bool __grp_src_isexc_excl(const struct net_bridge_mcast *brmctx, + struct net_bridge_port_group *pg, void *h_addr, void *srcs, u32 nsrcs, size_t addr_size, int grec_type) { - struct net_bridge *br = pg->key.port->br; struct net_bridge_group_src *ent; unsigned long now = jiffies; bool changed = false; @@ -2012,13 +2031,14 @@ static bool __grp_src_isexc_excl(struct net_bridge_port_group *pg, void *h_addr, ent = br_multicast_new_group_src(pg, &src_ip); if (ent) { __grp_src_mod_timer(ent, - now + br_multicast_gmi(br)); + now + br_multicast_gmi(brmctx)); changed = true; } } } - if (br_multicast_eht_handle(pg, h_addr, srcs, nsrcs, addr_size, grec_type)) + if (br_multicast_eht_handle(brmctx, pg, h_addr, srcs, nsrcs, addr_size, + grec_type)) changed = true; if 
(__grp_src_delete_marked(pg)) @@ -2027,28 +2047,28 @@ static bool __grp_src_isexc_excl(struct net_bridge_port_group *pg, void *h_addr, return changed; } -static bool br_multicast_isexc(struct net_bridge_port_group *pg, void *h_addr, +static bool br_multicast_isexc(const struct net_bridge_mcast *brmctx, + struct net_bridge_port_group *pg, void *h_addr, void *srcs, u32 nsrcs, size_t addr_size, int grec_type) { - struct net_bridge *br = pg->key.port->br; bool changed = false; switch (pg->filter_mode) { case MCAST_INCLUDE: - __grp_src_isexc_incl(pg, h_addr, srcs, nsrcs, addr_size, + __grp_src_isexc_incl(brmctx, pg, h_addr, srcs, nsrcs, addr_size, grec_type); br_multicast_star_g_handle_mode(pg, MCAST_EXCLUDE); changed = true; break; case MCAST_EXCLUDE: - changed = __grp_src_isexc_excl(pg, h_addr, srcs, nsrcs, addr_size, - grec_type); + changed = __grp_src_isexc_excl(brmctx, pg, h_addr, srcs, nsrcs, + addr_size, grec_type); break; } pg->filter_mode = MCAST_EXCLUDE; - mod_timer(&pg->timer, jiffies + br_multicast_gmi(br)); + mod_timer(&pg->timer, jiffies + br_multicast_gmi(brmctx)); return changed; } @@ -2057,11 +2077,12 @@ static bool br_multicast_isexc(struct net_bridge_port_group *pg, void *h_addr, * INCLUDE (A) TO_IN (B) INCLUDE (A+B) (B)=GMI * Send Q(G,A-B) */ -static bool __grp_src_toin_incl(struct net_bridge_port_group *pg, void *h_addr, +static bool __grp_src_toin_incl(struct net_bridge_mcast *brmctx, + struct net_bridge_mcast_port *pmctx, + struct net_bridge_port_group *pg, void *h_addr, void *srcs, u32 nsrcs, size_t addr_size, int grec_type) { - struct net_bridge *br = pg->key.port->br; u32 src_idx, to_send = pg->src_ents; struct net_bridge_group_src *ent; unsigned long now = jiffies; @@ -2085,14 +2106,15 @@ static bool __grp_src_toin_incl(struct net_bridge_port_group *pg, void *h_addr, changed = true; } if (ent) - __grp_src_mod_timer(ent, now + br_multicast_gmi(br)); + __grp_src_mod_timer(ent, now + br_multicast_gmi(brmctx)); } - if (br_multicast_eht_handle(pg, h_addr, srcs, nsrcs, addr_size, grec_type)) + if (br_multicast_eht_handle(brmctx, pg, h_addr, srcs, nsrcs, addr_size, + grec_type)) changed = true; if (to_send) - __grp_src_query_marked_and_rexmit(pg); + __grp_src_query_marked_and_rexmit(brmctx, pmctx, pg); return changed; } @@ -2102,11 +2124,12 @@ static bool __grp_src_toin_incl(struct net_bridge_port_group *pg, void *h_addr, * Send Q(G,X-A) * Send Q(G) */ -static bool __grp_src_toin_excl(struct net_bridge_port_group *pg, void *h_addr, +static bool __grp_src_toin_excl(struct net_bridge_mcast *brmctx, + struct net_bridge_mcast_port *pmctx, + struct net_bridge_port_group *pg, void *h_addr, void *srcs, u32 nsrcs, size_t addr_size, int grec_type) { - struct net_bridge *br = pg->key.port->br; u32 src_idx, to_send = pg->src_ents; struct net_bridge_group_src *ent; unsigned long now = jiffies; @@ -2133,21 +2156,24 @@ static bool __grp_src_toin_excl(struct net_bridge_port_group *pg, void *h_addr, changed = true; } if (ent) - __grp_src_mod_timer(ent, now + br_multicast_gmi(br)); + __grp_src_mod_timer(ent, now + br_multicast_gmi(brmctx)); } - if (br_multicast_eht_handle(pg, h_addr, srcs, nsrcs, addr_size, grec_type)) + if (br_multicast_eht_handle(brmctx, pg, h_addr, srcs, nsrcs, addr_size, + grec_type)) changed = true; if (to_send) - __grp_src_query_marked_and_rexmit(pg); + __grp_src_query_marked_and_rexmit(brmctx, pmctx, pg); - __grp_send_query_and_rexmit(pg); + __grp_send_query_and_rexmit(brmctx, pmctx, pg); return changed; } -static bool br_multicast_toin(struct net_bridge_port_group 
*pg, void *h_addr, +static bool br_multicast_toin(struct net_bridge_mcast *brmctx, + struct net_bridge_mcast_port *pmctx, + struct net_bridge_port_group *pg, void *h_addr, void *srcs, u32 nsrcs, size_t addr_size, int grec_type) { @@ -2155,12 +2181,12 @@ static bool br_multicast_toin(struct net_bridge_port_group *pg, void *h_addr, switch (pg->filter_mode) { case MCAST_INCLUDE: - changed = __grp_src_toin_incl(pg, h_addr, srcs, nsrcs, addr_size, - grec_type); + changed = __grp_src_toin_incl(brmctx, pmctx, pg, h_addr, srcs, + nsrcs, addr_size, grec_type); break; case MCAST_EXCLUDE: - changed = __grp_src_toin_excl(pg, h_addr, srcs, nsrcs, addr_size, - grec_type); + changed = __grp_src_toin_excl(brmctx, pmctx, pg, h_addr, srcs, + nsrcs, addr_size, grec_type); break; } @@ -2182,7 +2208,9 @@ static bool br_multicast_toin(struct net_bridge_port_group *pg, void *h_addr, * Send Q(G,A*B) * Group Timer=GMI */ -static void __grp_src_toex_incl(struct net_bridge_port_group *pg, void *h_addr, +static void __grp_src_toex_incl(struct net_bridge_mcast *brmctx, + struct net_bridge_mcast_port *pmctx, + struct net_bridge_port_group *pg, void *h_addr, void *srcs, u32 nsrcs, size_t addr_size, int grec_type) { @@ -2209,11 +2237,12 @@ static void __grp_src_toex_incl(struct net_bridge_port_group *pg, void *h_addr, br_multicast_fwd_src_handle(ent); } - br_multicast_eht_handle(pg, h_addr, srcs, nsrcs, addr_size, grec_type); + br_multicast_eht_handle(brmctx, pg, h_addr, srcs, nsrcs, addr_size, + grec_type); __grp_src_delete_marked(pg); if (to_send) - __grp_src_query_marked_and_rexmit(pg); + __grp_src_query_marked_and_rexmit(brmctx, pmctx, pg); } /* State Msg type New state Actions @@ -2223,7 +2252,9 @@ static void __grp_src_toex_incl(struct net_bridge_port_group *pg, void *h_addr, * Send Q(G,A-Y) * Group Timer=GMI */ -static bool __grp_src_toex_excl(struct net_bridge_port_group *pg, void *h_addr, +static bool __grp_src_toex_excl(struct net_bridge_mcast *brmctx, + struct net_bridge_mcast_port *pmctx, + struct net_bridge_port_group *pg, void *h_addr, void *srcs, u32 nsrcs, size_t addr_size, int grec_type) { @@ -2255,39 +2286,41 @@ static bool __grp_src_toex_excl(struct net_bridge_port_group *pg, void *h_addr, } } - if (br_multicast_eht_handle(pg, h_addr, srcs, nsrcs, addr_size, grec_type)) + if (br_multicast_eht_handle(brmctx, pg, h_addr, srcs, nsrcs, addr_size, + grec_type)) changed = true; if (__grp_src_delete_marked(pg)) changed = true; if (to_send) - __grp_src_query_marked_and_rexmit(pg); + __grp_src_query_marked_and_rexmit(brmctx, pmctx, pg); return changed; } -static bool br_multicast_toex(struct net_bridge_port_group *pg, void *h_addr, +static bool br_multicast_toex(struct net_bridge_mcast *brmctx, + struct net_bridge_mcast_port *pmctx, + struct net_bridge_port_group *pg, void *h_addr, void *srcs, u32 nsrcs, size_t addr_size, int grec_type) { - struct net_bridge *br = pg->key.port->br; bool changed = false; switch (pg->filter_mode) { case MCAST_INCLUDE: - __grp_src_toex_incl(pg, h_addr, srcs, nsrcs, addr_size, - grec_type); + __grp_src_toex_incl(brmctx, pmctx, pg, h_addr, srcs, nsrcs, + addr_size, grec_type); br_multicast_star_g_handle_mode(pg, MCAST_EXCLUDE); changed = true; break; case MCAST_EXCLUDE: - changed = __grp_src_toex_excl(pg, h_addr, srcs, nsrcs, addr_size, - grec_type); + changed = __grp_src_toex_excl(brmctx, pmctx, pg, h_addr, srcs, + nsrcs, addr_size, grec_type); break; } pg->filter_mode = MCAST_EXCLUDE; - mod_timer(&pg->timer, jiffies + br_multicast_gmi(br)); + mod_timer(&pg->timer, jiffies + 
br_multicast_gmi(brmctx)); return changed; } @@ -2295,7 +2328,9 @@ static bool br_multicast_toex(struct net_bridge_port_group *pg, void *h_addr, /* State Msg type New state Actions * INCLUDE (A) BLOCK (B) INCLUDE (A) Send Q(G,A*B) */ -static bool __grp_src_block_incl(struct net_bridge_port_group *pg, void *h_addr, +static bool __grp_src_block_incl(struct net_bridge_mcast *brmctx, + struct net_bridge_mcast_port *pmctx, + struct net_bridge_port_group *pg, void *h_addr, void *srcs, u32 nsrcs, size_t addr_size, int grec_type) { struct net_bridge_group_src *ent; @@ -2317,11 +2352,12 @@ static bool __grp_src_block_incl(struct net_bridge_port_group *pg, void *h_addr, } } - if (br_multicast_eht_handle(pg, h_addr, srcs, nsrcs, addr_size, grec_type)) + if (br_multicast_eht_handle(brmctx, pg, h_addr, srcs, nsrcs, addr_size, + grec_type)) changed = true; if (to_send) - __grp_src_query_marked_and_rexmit(pg); + __grp_src_query_marked_and_rexmit(brmctx, pmctx, pg); return changed; } @@ -2330,7 +2366,9 @@ static bool __grp_src_block_incl(struct net_bridge_port_group *pg, void *h_addr, * EXCLUDE (X,Y) BLOCK (A) EXCLUDE (X+(A-Y),Y) (A-X-Y)=Group Timer * Send Q(G,A-Y) */ -static bool __grp_src_block_excl(struct net_bridge_port_group *pg, void *h_addr, +static bool __grp_src_block_excl(struct net_bridge_mcast *brmctx, + struct net_bridge_mcast_port *pmctx, + struct net_bridge_port_group *pg, void *h_addr, void *srcs, u32 nsrcs, size_t addr_size, int grec_type) { struct net_bridge_group_src *ent; @@ -2359,28 +2397,31 @@ static bool __grp_src_block_excl(struct net_bridge_port_group *pg, void *h_addr, } } - if (br_multicast_eht_handle(pg, h_addr, srcs, nsrcs, addr_size, grec_type)) + if (br_multicast_eht_handle(brmctx, pg, h_addr, srcs, nsrcs, addr_size, + grec_type)) changed = true; if (to_send) - __grp_src_query_marked_and_rexmit(pg); + __grp_src_query_marked_and_rexmit(brmctx, pmctx, pg); return changed; } -static bool br_multicast_block(struct net_bridge_port_group *pg, void *h_addr, +static bool br_multicast_block(struct net_bridge_mcast *brmctx, + struct net_bridge_mcast_port *pmctx, + struct net_bridge_port_group *pg, void *h_addr, void *srcs, u32 nsrcs, size_t addr_size, int grec_type) { bool changed = false; switch (pg->filter_mode) { case MCAST_INCLUDE: - changed = __grp_src_block_incl(pg, h_addr, srcs, nsrcs, addr_size, - grec_type); + changed = __grp_src_block_incl(brmctx, pmctx, pg, h_addr, srcs, + nsrcs, addr_size, grec_type); break; case MCAST_EXCLUDE: - changed = __grp_src_block_excl(pg, h_addr, srcs, nsrcs, addr_size, - grec_type); + changed = __grp_src_block_excl(brmctx, pmctx, pg, h_addr, srcs, + nsrcs, addr_size, grec_type); break; } @@ -2415,12 +2456,12 @@ br_multicast_find_port(struct net_bridge_mdb_entry *mp, return NULL; } -static int br_ip4_multicast_igmp3_report(struct net_bridge *br, - struct net_bridge_port *port, +static int br_ip4_multicast_igmp3_report(struct net_bridge_mcast *brmctx, + struct net_bridge_mcast_port *pmctx, struct sk_buff *skb, u16 vid) { - bool igmpv2 = br->multicast_ctx.multicast_igmp_version == 2; + bool igmpv2 = brmctx->multicast_igmp_version == 2; struct net_bridge_mdb_entry *mdst; struct net_bridge_port_group *pg; const unsigned char *src; @@ -2467,25 +2508,26 @@ static int br_ip4_multicast_igmp3_report(struct net_bridge *br, if (nsrcs == 0 && (type == IGMPV3_CHANGE_TO_INCLUDE || type == IGMPV3_MODE_IS_INCLUDE)) { - if (!port || igmpv2) { - br_ip4_multicast_leave_group(br, port, group, vid, src); + if (!pmctx || igmpv2) { + br_ip4_multicast_leave_group(brmctx, 
pmctx, + group, vid, src); continue; } } else { - err = br_ip4_multicast_add_group(br, port, group, vid, - src, igmpv2); + err = br_ip4_multicast_add_group(brmctx, pmctx, group, + vid, src, igmpv2); if (err) break; } - if (!port || igmpv2) + if (!pmctx || igmpv2) continue; - spin_lock_bh(&br->multicast_lock); - mdst = br_mdb_ip4_get(br, group, vid); + spin_lock_bh(&brmctx->br->multicast_lock); + mdst = br_mdb_ip4_get(brmctx->br, group, vid); if (!mdst) goto unlock_continue; - pg = br_multicast_find_port(mdst, port, src); + pg = br_multicast_find_port(mdst, pmctx->port, src); if (!pg || (pg->flags & MDB_PG_FLAGS_PERMANENT)) goto unlock_continue; /* reload grec and host addr */ @@ -2493,46 +2535,52 @@ static int br_ip4_multicast_igmp3_report(struct net_bridge *br, h_addr = &ip_hdr(skb)->saddr; switch (type) { case IGMPV3_ALLOW_NEW_SOURCES: - changed = br_multicast_isinc_allow(pg, h_addr, grec->grec_src, + changed = br_multicast_isinc_allow(brmctx, pg, h_addr, + grec->grec_src, nsrcs, sizeof(__be32), type); break; case IGMPV3_MODE_IS_INCLUDE: - changed = br_multicast_isinc_allow(pg, h_addr, grec->grec_src, + changed = br_multicast_isinc_allow(brmctx, pg, h_addr, + grec->grec_src, nsrcs, sizeof(__be32), type); break; case IGMPV3_MODE_IS_EXCLUDE: - changed = br_multicast_isexc(pg, h_addr, grec->grec_src, + changed = br_multicast_isexc(brmctx, pg, h_addr, + grec->grec_src, nsrcs, sizeof(__be32), type); break; case IGMPV3_CHANGE_TO_INCLUDE: - changed = br_multicast_toin(pg, h_addr, grec->grec_src, + changed = br_multicast_toin(brmctx, pmctx, pg, h_addr, + grec->grec_src, nsrcs, sizeof(__be32), type); break; case IGMPV3_CHANGE_TO_EXCLUDE: - changed = br_multicast_toex(pg, h_addr, grec->grec_src, + changed = br_multicast_toex(brmctx, pmctx, pg, h_addr, + grec->grec_src, nsrcs, sizeof(__be32), type); break; case IGMPV3_BLOCK_OLD_SOURCES: - changed = br_multicast_block(pg, h_addr, grec->grec_src, + changed = br_multicast_block(brmctx, pmctx, pg, h_addr, + grec->grec_src, nsrcs, sizeof(__be32), type); break; } if (changed) - br_mdb_notify(br->dev, mdst, pg, RTM_NEWMDB); + br_mdb_notify(brmctx->br->dev, mdst, pg, RTM_NEWMDB); unlock_continue: - spin_unlock_bh(&br->multicast_lock); + spin_unlock_bh(&brmctx->br->multicast_lock); } return err; } #if IS_ENABLED(CONFIG_IPV6) -static int br_ip6_multicast_mld2_report(struct net_bridge *br, - struct net_bridge_port *port, +static int br_ip6_multicast_mld2_report(struct net_bridge_mcast *brmctx, + struct net_bridge_mcast_port *pmctx, struct sk_buff *skb, u16 vid) { - bool mldv1 = br->multicast_ctx.multicast_mld_version == 1; + bool mldv1 = brmctx->multicast_mld_version == 1; struct net_bridge_mdb_entry *mdst; struct net_bridge_port_group *pg; unsigned int nsrcs_offset; @@ -2593,85 +2641,83 @@ static int br_ip6_multicast_mld2_report(struct net_bridge *br, if ((grec->grec_type == MLD2_CHANGE_TO_INCLUDE || grec->grec_type == MLD2_MODE_IS_INCLUDE) && nsrcs == 0) { - if (!port || mldv1) { - br_ip6_multicast_leave_group(br, port, + if (!pmctx || mldv1) { + br_ip6_multicast_leave_group(brmctx, pmctx, &grec->grec_mca, vid, src); continue; } } else { - err = br_ip6_multicast_add_group(br, port, + err = br_ip6_multicast_add_group(brmctx, pmctx, &grec->grec_mca, vid, src, mldv1); if (err) break; } - if (!port || mldv1) + if (!pmctx || mldv1) continue; - spin_lock_bh(&br->multicast_lock); - mdst = br_mdb_ip6_get(br, &grec->grec_mca, vid); + spin_lock_bh(&brmctx->br->multicast_lock); + mdst = br_mdb_ip6_get(brmctx->br, &grec->grec_mca, vid); if (!mdst) goto unlock_continue; 
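
/*
 * Note on the per-record dispatch that follows (same shape in
 * br_ip4_multicast_igmp3_report() above and br_ip6_multicast_mld2_report()
 * here): record handlers that only update source-list state take the bridge
 * context, while the ones that may emit group-and-source queries also take
 * the port context, because __br_multicast_send_query() transmits on
 * pmctx->port->dev.  Below is a compressed, compilable sketch of that shape,
 * illustration only: the stand-in types and the handle_*()/dispatch_record()
 * names are invented here, and the real handlers additionally receive h_addr,
 * the source array, the address size and the raw record type, as the
 * surrounding hunks show.
 */

#include <stdbool.h>
#include <stdio.h>

/* reduced stand-ins for the real bridge types */
struct net_bridge_mcast { int unused; };
struct net_bridge_mcast_port { int unused; };
struct net_bridge_port_group { int filter_mode; };

enum record_type {
	REC_ALLOW_NEW_SOURCES,
	REC_MODE_IS_INCLUDE,
	REC_MODE_IS_EXCLUDE,
	REC_CHANGE_TO_INCLUDE,
	REC_CHANGE_TO_EXCLUDE,
	REC_BLOCK_OLD_SOURCES,
};

/* state-only handlers: the bridge context (timers, intervals) is enough */
static bool handle_isinc_allow(const struct net_bridge_mcast *brmctx,
			       struct net_bridge_port_group *pg)
{ (void)brmctx; (void)pg; return true; }

static bool handle_isexc(const struct net_bridge_mcast *brmctx,
			 struct net_bridge_port_group *pg)
{ (void)brmctx; (void)pg; return true; }

/* handlers that may send Q(G)/Q(G,S): they also need the port context */
static bool handle_toin(struct net_bridge_mcast *brmctx,
			struct net_bridge_mcast_port *pmctx,
			struct net_bridge_port_group *pg)
{ (void)brmctx; (void)pmctx; (void)pg; return true; }

static bool handle_toex(struct net_bridge_mcast *brmctx,
			struct net_bridge_mcast_port *pmctx,
			struct net_bridge_port_group *pg)
{ (void)brmctx; (void)pmctx; (void)pg; return true; }

static bool handle_block(struct net_bridge_mcast *brmctx,
			 struct net_bridge_mcast_port *pmctx,
			 struct net_bridge_port_group *pg)
{ (void)brmctx; (void)pmctx; (void)pg; return true; }

static bool dispatch_record(struct net_bridge_mcast *brmctx,
			    struct net_bridge_mcast_port *pmctx,
			    struct net_bridge_port_group *pg,
			    enum record_type type)
{
	switch (type) {
	case REC_ALLOW_NEW_SOURCES:
	case REC_MODE_IS_INCLUDE:
		return handle_isinc_allow(brmctx, pg);
	case REC_MODE_IS_EXCLUDE:
		return handle_isexc(brmctx, pg);
	case REC_CHANGE_TO_INCLUDE:
		return handle_toin(brmctx, pmctx, pg);
	case REC_CHANGE_TO_EXCLUDE:
		return handle_toex(brmctx, pmctx, pg);
	case REC_BLOCK_OLD_SOURCES:
		return handle_block(brmctx, pmctx, pg);
	}
	return false;
}

int main(void)
{
	struct net_bridge_mcast brmctx = { 0 };
	struct net_bridge_mcast_port pmctx = { 0 };
	struct net_bridge_port_group pg = { 0 };

	printf("changed=%d\n",
	       dispatch_record(&brmctx, &pmctx, &pg, REC_CHANGE_TO_INCLUDE));
	return 0;
}

/*
 * This mirrors why __grp_src_query_marked_and_rexmit() and
 * __grp_send_query_and_rexmit() gained the same (brmctx, pmctx) pair earlier
 * in the patch: they are the query-sending helpers behind TO_IN/TO_EX/BLOCK.
 */
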
- pg = br_multicast_find_port(mdst, port, src); + pg = br_multicast_find_port(mdst, pmctx->port, src); if (!pg || (pg->flags & MDB_PG_FLAGS_PERMANENT)) goto unlock_continue; h_addr = &ipv6_hdr(skb)->saddr; switch (grec->grec_type) { case MLD2_ALLOW_NEW_SOURCES: - changed = br_multicast_isinc_allow(pg, h_addr, + changed = br_multicast_isinc_allow(brmctx, pg, h_addr, grec->grec_src, nsrcs, sizeof(struct in6_addr), grec->grec_type); break; case MLD2_MODE_IS_INCLUDE: - changed = br_multicast_isinc_allow(pg, h_addr, + changed = br_multicast_isinc_allow(brmctx, pg, h_addr, grec->grec_src, nsrcs, sizeof(struct in6_addr), grec->grec_type); break; case MLD2_MODE_IS_EXCLUDE: - changed = br_multicast_isexc(pg, h_addr, + changed = br_multicast_isexc(brmctx, pg, h_addr, grec->grec_src, nsrcs, sizeof(struct in6_addr), grec->grec_type); break; case MLD2_CHANGE_TO_INCLUDE: - changed = br_multicast_toin(pg, h_addr, + changed = br_multicast_toin(brmctx, pmctx, pg, h_addr, grec->grec_src, nsrcs, sizeof(struct in6_addr), grec->grec_type); break; case MLD2_CHANGE_TO_EXCLUDE: - changed = br_multicast_toex(pg, h_addr, + changed = br_multicast_toex(brmctx, pmctx, pg, h_addr, grec->grec_src, nsrcs, sizeof(struct in6_addr), grec->grec_type); break; case MLD2_BLOCK_OLD_SOURCES: - changed = br_multicast_block(pg, h_addr, + changed = br_multicast_block(brmctx, pmctx, pg, h_addr, grec->grec_src, nsrcs, sizeof(struct in6_addr), grec->grec_type); break; } if (changed) - br_mdb_notify(br->dev, mdst, pg, RTM_NEWMDB); + br_mdb_notify(brmctx->br->dev, mdst, pg, RTM_NEWMDB); unlock_continue: - spin_unlock_bh(&br->multicast_lock); + spin_unlock_bh(&brmctx->br->multicast_lock); } return err; } #endif -static bool br_ip4_multicast_select_querier(struct net_bridge *br, +static bool br_ip4_multicast_select_querier(struct net_bridge_mcast *brmctx, struct net_bridge_port *port, __be32 saddr) { - struct net_bridge_mcast *brmctx = &br->multicast_ctx; - if (!timer_pending(&brmctx->ip4_own_query.timer) && !timer_pending(&brmctx->ip4_other_query.timer)) goto update; @@ -2694,12 +2740,10 @@ static bool br_ip4_multicast_select_querier(struct net_bridge *br, } #if IS_ENABLED(CONFIG_IPV6) -static bool br_ip6_multicast_select_querier(struct net_bridge *br, +static bool br_ip6_multicast_select_querier(struct net_bridge_mcast *brmctx, struct net_bridge_port *port, struct in6_addr *saddr) { - struct net_bridge_mcast *brmctx = &br->multicast_ctx; - if (!timer_pending(&brmctx->ip6_own_query.timer) && !timer_pending(&brmctx->ip6_other_query.timer)) goto update; @@ -2720,15 +2764,14 @@ static bool br_ip6_multicast_select_querier(struct net_bridge *br, #endif static void -br_multicast_update_query_timer(struct net_bridge *br, +br_multicast_update_query_timer(struct net_bridge_mcast *brmctx, struct bridge_mcast_other_query *query, unsigned long max_delay) { if (!timer_pending(&query->timer)) query->delay_time = jiffies + max_delay; - mod_timer(&query->timer, jiffies + - br->multicast_ctx.multicast_querier_interval); + mod_timer(&query->timer, jiffies + brmctx->multicast_querier_interval); } static void br_port_mc_router_state_change(struct net_bridge_port *p, @@ -2785,14 +2828,14 @@ br_multicast_get_rport_slot(struct net_bridge_mcast *brmctx, return slot; } -static bool br_multicast_no_router_otherpf(struct net_bridge_port *port, +static bool br_multicast_no_router_otherpf(struct net_bridge_mcast_port *pmctx, struct hlist_node *rnode) { #if IS_ENABLED(CONFIG_IPV6) - if (rnode != &port->multicast_ctx.ip6_rlist) - return 
hlist_unhashed(&port->multicast_ctx.ip6_rlist); + if (rnode != &pmctx->ip6_rlist) + return hlist_unhashed(&pmctx->ip6_rlist); else - return hlist_unhashed(&port->multicast_ctx.ip4_rlist); + return hlist_unhashed(&pmctx->ip4_rlist); #else return true; #endif @@ -2803,7 +2846,7 @@ static bool br_multicast_no_router_otherpf(struct net_bridge_port *port, * and locked by br->multicast_lock and RCU */ static void br_multicast_add_router(struct net_bridge_mcast *brmctx, - struct net_bridge_port *port, + struct net_bridge_mcast_port *pmctx, struct hlist_node *rlist, struct hlist_head *mc_router_list) { @@ -2812,7 +2855,7 @@ static void br_multicast_add_router(struct net_bridge_mcast *brmctx, if (!hlist_unhashed(rlist)) return; - slot = br_multicast_get_rport_slot(brmctx, port, mc_router_list); + slot = br_multicast_get_rport_slot(brmctx, pmctx->port, mc_router_list); if (slot) hlist_add_behind_rcu(rlist, slot); @@ -2823,9 +2866,9 @@ static void br_multicast_add_router(struct net_bridge_mcast *brmctx, * switched from no IPv4/IPv6 multicast router to a new * IPv4 or IPv6 multicast router. */ - if (br_multicast_no_router_otherpf(port, rlist)) { - br_rtr_notify(port->br->dev, port, RTM_NEWMDB); - br_port_mc_router_state_change(port, true); + if (br_multicast_no_router_otherpf(pmctx, rlist)) { + br_rtr_notify(pmctx->port->br->dev, pmctx->port, RTM_NEWMDB); + br_port_mc_router_state_change(pmctx->port, true); } } @@ -2833,123 +2876,119 @@ static void br_multicast_add_router(struct net_bridge_mcast *brmctx, * list is maintained ordered by pointer value * and locked by br->multicast_lock and RCU */ -static void br_ip4_multicast_add_router(struct net_bridge *br, - struct net_bridge_port *port) +static void br_ip4_multicast_add_router(struct net_bridge_mcast *brmctx, + struct net_bridge_mcast_port *pmctx) { - br_multicast_add_router(&br->multicast_ctx, port, - &port->multicast_ctx.ip4_rlist, - &br->multicast_ctx.ip4_mc_router_list); + br_multicast_add_router(brmctx, pmctx, &pmctx->ip4_rlist, + &brmctx->ip4_mc_router_list); } /* Add port to router_list * list is maintained ordered by pointer value * and locked by br->multicast_lock and RCU */ -static void br_ip6_multicast_add_router(struct net_bridge *br, - struct net_bridge_port *port) +static void br_ip6_multicast_add_router(struct net_bridge_mcast *brmctx, + struct net_bridge_mcast_port *pmctx) { #if IS_ENABLED(CONFIG_IPV6) - br_multicast_add_router(&br->multicast_ctx, port, - &port->multicast_ctx.ip6_rlist, - &br->multicast_ctx.ip6_mc_router_list); + br_multicast_add_router(brmctx, pmctx, &pmctx->ip6_rlist, + &brmctx->ip6_mc_router_list); #endif } -static void br_multicast_mark_router(struct net_bridge *br, - struct net_bridge_port *port, +static void br_multicast_mark_router(struct net_bridge_mcast *brmctx, + struct net_bridge_mcast_port *pmctx, struct timer_list *timer, struct hlist_node *rlist, struct hlist_head *mc_router_list) { - struct net_bridge_mcast *brmctx = &br->multicast_ctx; unsigned long now = jiffies; - if (!port) { + if (!pmctx) { if (brmctx->multicast_router == MDB_RTR_TYPE_TEMP_QUERY) { if (!br_ip4_multicast_is_router(brmctx) && !br_ip6_multicast_is_router(brmctx)) - br_mc_router_state_change(br, true); + br_mc_router_state_change(brmctx->br, true); mod_timer(timer, now + brmctx->multicast_querier_interval); } return; } - if (port->multicast_ctx.multicast_router == MDB_RTR_TYPE_DISABLED || - port->multicast_ctx.multicast_router == MDB_RTR_TYPE_PERM) + if (pmctx->multicast_router == MDB_RTR_TYPE_DISABLED || + pmctx->multicast_router == 
MDB_RTR_TYPE_PERM) return; - br_multicast_add_router(brmctx, port, rlist, mc_router_list); + br_multicast_add_router(brmctx, pmctx, rlist, mc_router_list); mod_timer(timer, now + brmctx->multicast_querier_interval); } -static void br_ip4_multicast_mark_router(struct net_bridge *br, - struct net_bridge_port *port) +static void br_ip4_multicast_mark_router(struct net_bridge_mcast *brmctx, + struct net_bridge_mcast_port *pmctx) { - struct timer_list *timer = &br->multicast_ctx.ip4_mc_router_timer; + struct timer_list *timer = &brmctx->ip4_mc_router_timer; struct hlist_node *rlist = NULL; - if (port) { - timer = &port->multicast_ctx.ip4_mc_router_timer; - rlist = &port->multicast_ctx.ip4_rlist; + if (pmctx) { + timer = &pmctx->ip4_mc_router_timer; + rlist = &pmctx->ip4_rlist; } - br_multicast_mark_router(br, port, timer, rlist, - &br->multicast_ctx.ip4_mc_router_list); + br_multicast_mark_router(brmctx, pmctx, timer, rlist, + &brmctx->ip4_mc_router_list); } -static void br_ip6_multicast_mark_router(struct net_bridge *br, - struct net_bridge_port *port) +static void br_ip6_multicast_mark_router(struct net_bridge_mcast *brmctx, + struct net_bridge_mcast_port *pmctx) { #if IS_ENABLED(CONFIG_IPV6) - struct timer_list *timer = &br->multicast_ctx.ip6_mc_router_timer; + struct timer_list *timer = &brmctx->ip6_mc_router_timer; struct hlist_node *rlist = NULL; - if (port) { - timer = &port->multicast_ctx.ip6_mc_router_timer; - rlist = &port->multicast_ctx.ip6_rlist; + if (pmctx) { + timer = &pmctx->ip6_mc_router_timer; + rlist = &pmctx->ip6_rlist; } - br_multicast_mark_router(br, port, timer, rlist, - &br->multicast_ctx.ip6_mc_router_list); + br_multicast_mark_router(brmctx, pmctx, timer, rlist, + &brmctx->ip6_mc_router_list); #endif } static void -br_ip4_multicast_query_received(struct net_bridge *br, - struct net_bridge_port *port, +br_ip4_multicast_query_received(struct net_bridge_mcast *brmctx, + struct net_bridge_mcast_port *pmctx, struct bridge_mcast_other_query *query, struct br_ip *saddr, unsigned long max_delay) { - if (!br_ip4_multicast_select_querier(br, port, saddr->src.ip4)) + if (!br_ip4_multicast_select_querier(brmctx, pmctx->port, saddr->src.ip4)) return; - br_multicast_update_query_timer(br, query, max_delay); - br_ip4_multicast_mark_router(br, port); + br_multicast_update_query_timer(brmctx, query, max_delay); + br_ip4_multicast_mark_router(brmctx, pmctx); } #if IS_ENABLED(CONFIG_IPV6) static void -br_ip6_multicast_query_received(struct net_bridge *br, - struct net_bridge_port *port, +br_ip6_multicast_query_received(struct net_bridge_mcast *brmctx, + struct net_bridge_mcast_port *pmctx, struct bridge_mcast_other_query *query, struct br_ip *saddr, unsigned long max_delay) { - if (!br_ip6_multicast_select_querier(br, port, &saddr->src.ip6)) + if (!br_ip6_multicast_select_querier(brmctx, pmctx->port, &saddr->src.ip6)) return; - br_multicast_update_query_timer(br, query, max_delay); - br_ip6_multicast_mark_router(br, port); + br_multicast_update_query_timer(brmctx, query, max_delay); + br_ip6_multicast_mark_router(brmctx, pmctx); } #endif -static void br_ip4_multicast_query(struct net_bridge *br, - struct net_bridge_port *port, +static void br_ip4_multicast_query(struct net_bridge_mcast *brmctx, + struct net_bridge_mcast_port *pmctx, struct sk_buff *skb, u16 vid) { - struct net_bridge_mcast *brmctx = &br->multicast_ctx; unsigned int transport_len = ip_transport_len(skb); const struct iphdr *iph = ip_hdr(skb); struct igmphdr *ih = igmp_hdr(skb); @@ -2962,9 +3001,9 @@ static void 
br_ip4_multicast_query(struct net_bridge *br, unsigned long now = jiffies; __be32 group; - spin_lock(&br->multicast_lock); - if (!netif_running(br->dev) || - (port && port->state == BR_STATE_DISABLED)) + spin_lock(&brmctx->br->multicast_lock); + if (!netif_running(brmctx->br->dev) || + (pmctx && pmctx->port->state == BR_STATE_DISABLED)) goto out; group = ih->group; @@ -2993,13 +3032,13 @@ static void br_ip4_multicast_query(struct net_bridge *br, saddr.proto = htons(ETH_P_IP); saddr.src.ip4 = iph->saddr; - br_ip4_multicast_query_received(br, port, + br_ip4_multicast_query_received(brmctx, pmctx, &brmctx->ip4_other_query, &saddr, max_delay); goto out; } - mp = br_mdb_ip4_get(br, group, vid); + mp = br_mdb_ip4_get(brmctx->br, group, vid); if (!mp) goto out; @@ -3012,7 +3051,7 @@ static void br_ip4_multicast_query(struct net_bridge *br, mod_timer(&mp->timer, now + max_delay); for (pp = &mp->ports; - (p = mlock_dereference(*pp, br)) != NULL; + (p = mlock_dereference(*pp, brmctx->br)) != NULL; pp = &p->next) { if (timer_pending(&p->timer) ? time_after(p->timer.expires, now + max_delay) : @@ -3023,16 +3062,15 @@ static void br_ip4_multicast_query(struct net_bridge *br, } out: - spin_unlock(&br->multicast_lock); + spin_unlock(&brmctx->br->multicast_lock); } #if IS_ENABLED(CONFIG_IPV6) -static int br_ip6_multicast_query(struct net_bridge *br, - struct net_bridge_port *port, +static int br_ip6_multicast_query(struct net_bridge_mcast *brmctx, + struct net_bridge_mcast_port *pmctx, struct sk_buff *skb, u16 vid) { - struct net_bridge_mcast *brmctx = &br->multicast_ctx; unsigned int transport_len = ipv6_transport_len(skb); struct mld_msg *mld; struct net_bridge_mdb_entry *mp; @@ -3047,9 +3085,9 @@ static int br_ip6_multicast_query(struct net_bridge *br, bool is_general_query; int err = 0; - spin_lock(&br->multicast_lock); - if (!netif_running(br->dev) || - (port && port->state == BR_STATE_DISABLED)) + spin_lock(&brmctx->br->multicast_lock); + if (!netif_running(brmctx->br->dev) || + (pmctx && pmctx->port->state == BR_STATE_DISABLED)) goto out; if (transport_len == sizeof(*mld)) { @@ -3083,7 +3121,7 @@ static int br_ip6_multicast_query(struct net_bridge *br, saddr.proto = htons(ETH_P_IPV6); saddr.src.ip6 = ipv6_hdr(skb)->saddr; - br_ip6_multicast_query_received(br, port, + br_ip6_multicast_query_received(brmctx, pmctx, &brmctx->ip6_other_query, &saddr, max_delay); goto out; @@ -3091,7 +3129,7 @@ static int br_ip6_multicast_query(struct net_bridge *br, goto out; } - mp = br_mdb_ip6_get(br, group, vid); + mp = br_mdb_ip6_get(brmctx->br, group, vid); if (!mp) goto out; @@ -3103,7 +3141,7 @@ static int br_ip6_multicast_query(struct net_bridge *br, mod_timer(&mp->timer, now + max_delay); for (pp = &mp->ports; - (p = mlock_dereference(*pp, br)) != NULL; + (p = mlock_dereference(*pp, brmctx->br)) != NULL; pp = &p->next) { if (timer_pending(&p->timer) ? 
time_after(p->timer.expires, now + max_delay) : @@ -3114,41 +3152,40 @@ static int br_ip6_multicast_query(struct net_bridge *br, } out: - spin_unlock(&br->multicast_lock); + spin_unlock(&brmctx->br->multicast_lock); return err; } #endif static void -br_multicast_leave_group(struct net_bridge *br, - struct net_bridge_port *port, +br_multicast_leave_group(struct net_bridge_mcast *brmctx, + struct net_bridge_mcast_port *pmctx, struct br_ip *group, struct bridge_mcast_other_query *other_query, struct bridge_mcast_own_query *own_query, const unsigned char *src) { - struct net_bridge_mcast *brmctx = &br->multicast_ctx; struct net_bridge_mdb_entry *mp; struct net_bridge_port_group *p; unsigned long now; unsigned long time; - spin_lock(&br->multicast_lock); - if (!netif_running(br->dev) || - (port && port->state == BR_STATE_DISABLED)) + spin_lock(&brmctx->br->multicast_lock); + if (!netif_running(brmctx->br->dev) || + (pmctx && pmctx->port->state == BR_STATE_DISABLED)) goto out; - mp = br_mdb_ip_get(br, group); + mp = br_mdb_ip_get(brmctx->br, group); if (!mp) goto out; - if (port && (port->flags & BR_MULTICAST_FAST_LEAVE)) { + if (pmctx && (pmctx->port->flags & BR_MULTICAST_FAST_LEAVE)) { struct net_bridge_port_group __rcu **pp; for (pp = &mp->ports; - (p = mlock_dereference(*pp, br)) != NULL; + (p = mlock_dereference(*pp, brmctx->br)) != NULL; pp = &p->next) { - if (!br_port_group_equal(p, port, src)) + if (!br_port_group_equal(p, pmctx->port, src)) continue; if (p->flags & MDB_PG_FLAGS_PERMANENT) @@ -3163,8 +3200,8 @@ br_multicast_leave_group(struct net_bridge *br, if (timer_pending(&other_query->timer)) goto out; - if (br_opt_get(br, BROPT_MULTICAST_QUERIER)) { - __br_multicast_send_query(br, port, NULL, NULL, &mp->addr, + if (br_opt_get(brmctx->br, BROPT_MULTICAST_QUERIER)) { + __br_multicast_send_query(brmctx, pmctx, NULL, NULL, &mp->addr, false, 0, NULL); time = jiffies + brmctx->multicast_last_member_count * @@ -3172,10 +3209,10 @@ br_multicast_leave_group(struct net_bridge *br, mod_timer(&own_query->timer, time); - for (p = mlock_dereference(mp->ports, br); + for (p = mlock_dereference(mp->ports, brmctx->br); p != NULL; - p = mlock_dereference(p->next, br)) { - if (!br_port_group_equal(p, port, src)) + p = mlock_dereference(p->next, brmctx->br)) { + if (!br_port_group_equal(p, pmctx->port, src)) continue; if (!hlist_unhashed(&p->mglist) && @@ -3193,7 +3230,7 @@ br_multicast_leave_group(struct net_bridge *br, time = now + brmctx->multicast_last_member_count * brmctx->multicast_last_member_interval; - if (!port) { + if (!pmctx) { if (mp->host_joined && (timer_pending(&mp->timer) ? 
time_after(mp->timer.expires, time) : @@ -3204,10 +3241,10 @@ br_multicast_leave_group(struct net_bridge *br, goto out; } - for (p = mlock_dereference(mp->ports, br); + for (p = mlock_dereference(mp->ports, brmctx->br); p != NULL; - p = mlock_dereference(p->next, br)) { - if (p->key.port != port) + p = mlock_dereference(p->next, brmctx->br)) { + if (p->key.port != pmctx->port) continue; if (!hlist_unhashed(&p->mglist) && @@ -3220,11 +3257,11 @@ br_multicast_leave_group(struct net_bridge *br, break; } out: - spin_unlock(&br->multicast_lock); + spin_unlock(&brmctx->br->multicast_lock); } -static void br_ip4_multicast_leave_group(struct net_bridge *br, - struct net_bridge_port *port, +static void br_ip4_multicast_leave_group(struct net_bridge_mcast *brmctx, + struct net_bridge_mcast_port *pmctx, __be32 group, __u16 vid, const unsigned char *src) @@ -3235,22 +3272,21 @@ static void br_ip4_multicast_leave_group(struct net_bridge *br, if (ipv4_is_local_multicast(group)) return; - own_query = port ? &port->multicast_ctx.ip4_own_query : - &br->multicast_ctx.ip4_own_query; + own_query = pmctx ? &pmctx->ip4_own_query : &brmctx->ip4_own_query; memset(&br_group, 0, sizeof(br_group)); br_group.dst.ip4 = group; br_group.proto = htons(ETH_P_IP); br_group.vid = vid; - br_multicast_leave_group(br, port, &br_group, - &br->multicast_ctx.ip4_other_query, + br_multicast_leave_group(brmctx, pmctx, &br_group, + &brmctx->ip4_other_query, own_query, src); } #if IS_ENABLED(CONFIG_IPV6) -static void br_ip6_multicast_leave_group(struct net_bridge *br, - struct net_bridge_port *port, +static void br_ip6_multicast_leave_group(struct net_bridge_mcast *brmctx, + struct net_bridge_mcast_port *pmctx, const struct in6_addr *group, __u16 vid, const unsigned char *src) @@ -3261,16 +3297,15 @@ static void br_ip6_multicast_leave_group(struct net_bridge *br, if (ipv6_addr_is_ll_all_nodes(group)) return; - own_query = port ? &port->multicast_ctx.ip6_own_query : - &br->multicast_ctx.ip6_own_query; + own_query = pmctx ? 
&pmctx->ip6_own_query : &brmctx->ip6_own_query; memset(&br_group, 0, sizeof(br_group)); br_group.dst.ip6 = *group; br_group.proto = htons(ETH_P_IPV6); br_group.vid = vid; - br_multicast_leave_group(br, port, &br_group, - &br->multicast_ctx.ip6_other_query, + br_multicast_leave_group(brmctx, pmctx, &br_group, + &brmctx->ip6_other_query, own_query, src); } #endif @@ -3308,8 +3343,8 @@ static void br_multicast_err_count(const struct net_bridge *br, u64_stats_update_end(&pstats->syncp); } -static void br_multicast_pim(struct net_bridge *br, - struct net_bridge_port *port, +static void br_multicast_pim(struct net_bridge_mcast *brmctx, + struct net_bridge_mcast_port *pmctx, const struct sk_buff *skb) { unsigned int offset = skb_transport_offset(skb); @@ -3320,31 +3355,32 @@ static void br_multicast_pim(struct net_bridge *br, pim_hdr_type(pimhdr) != PIM_TYPE_HELLO) return; - spin_lock(&br->multicast_lock); - br_ip4_multicast_mark_router(br, port); - spin_unlock(&br->multicast_lock); + spin_lock(&brmctx->br->multicast_lock); + br_ip4_multicast_mark_router(brmctx, pmctx); + spin_unlock(&brmctx->br->multicast_lock); } -static int br_ip4_multicast_mrd_rcv(struct net_bridge *br, - struct net_bridge_port *port, +static int br_ip4_multicast_mrd_rcv(struct net_bridge_mcast *brmctx, + struct net_bridge_mcast_port *pmctx, struct sk_buff *skb) { if (ip_hdr(skb)->protocol != IPPROTO_IGMP || igmp_hdr(skb)->type != IGMP_MRDISC_ADV) return -ENOMSG; - spin_lock(&br->multicast_lock); - br_ip4_multicast_mark_router(br, port); - spin_unlock(&br->multicast_lock); + spin_lock(&brmctx->br->multicast_lock); + br_ip4_multicast_mark_router(brmctx, pmctx); + spin_unlock(&brmctx->br->multicast_lock); return 0; } -static int br_multicast_ipv4_rcv(struct net_bridge *br, - struct net_bridge_port *port, +static int br_multicast_ipv4_rcv(struct net_bridge_mcast *brmctx, + struct net_bridge_mcast_port *pmctx, struct sk_buff *skb, u16 vid) { + struct net_bridge_port *p = pmctx ? 
pmctx->port : NULL; const unsigned char *src; struct igmphdr *ih; int err; @@ -3356,14 +3392,14 @@ static int br_multicast_ipv4_rcv(struct net_bridge *br, BR_INPUT_SKB_CB(skb)->mrouters_only = 1; } else if (pim_ipv4_all_pim_routers(ip_hdr(skb)->daddr)) { if (ip_hdr(skb)->protocol == IPPROTO_PIM) - br_multicast_pim(br, port, skb); + br_multicast_pim(brmctx, pmctx, skb); } else if (ipv4_is_all_snoopers(ip_hdr(skb)->daddr)) { - br_ip4_multicast_mrd_rcv(br, port, skb); + br_ip4_multicast_mrd_rcv(brmctx, pmctx, skb); } return 0; } else if (err < 0) { - br_multicast_err_count(br, port, skb->protocol); + br_multicast_err_count(brmctx->br, p, skb->protocol); return err; } @@ -3375,44 +3411,45 @@ static int br_multicast_ipv4_rcv(struct net_bridge *br, case IGMP_HOST_MEMBERSHIP_REPORT: case IGMPV2_HOST_MEMBERSHIP_REPORT: BR_INPUT_SKB_CB(skb)->mrouters_only = 1; - err = br_ip4_multicast_add_group(br, port, ih->group, vid, src, - true); + err = br_ip4_multicast_add_group(brmctx, pmctx, ih->group, vid, + src, true); break; case IGMPV3_HOST_MEMBERSHIP_REPORT: - err = br_ip4_multicast_igmp3_report(br, port, skb, vid); + err = br_ip4_multicast_igmp3_report(brmctx, pmctx, skb, vid); break; case IGMP_HOST_MEMBERSHIP_QUERY: - br_ip4_multicast_query(br, port, skb, vid); + br_ip4_multicast_query(brmctx, pmctx, skb, vid); break; case IGMP_HOST_LEAVE_MESSAGE: - br_ip4_multicast_leave_group(br, port, ih->group, vid, src); + br_ip4_multicast_leave_group(brmctx, pmctx, ih->group, vid, src); break; } - br_multicast_count(br, port, skb, BR_INPUT_SKB_CB(skb)->igmp, + br_multicast_count(brmctx->br, p, skb, BR_INPUT_SKB_CB(skb)->igmp, BR_MCAST_DIR_RX); return err; } #if IS_ENABLED(CONFIG_IPV6) -static void br_ip6_multicast_mrd_rcv(struct net_bridge *br, - struct net_bridge_port *port, +static void br_ip6_multicast_mrd_rcv(struct net_bridge_mcast *brmctx, + struct net_bridge_mcast_port *pmctx, struct sk_buff *skb) { if (icmp6_hdr(skb)->icmp6_type != ICMPV6_MRDISC_ADV) return; - spin_lock(&br->multicast_lock); - br_ip6_multicast_mark_router(br, port); - spin_unlock(&br->multicast_lock); + spin_lock(&brmctx->br->multicast_lock); + br_ip6_multicast_mark_router(brmctx, pmctx); + spin_unlock(&brmctx->br->multicast_lock); } -static int br_multicast_ipv6_rcv(struct net_bridge *br, - struct net_bridge_port *port, +static int br_multicast_ipv6_rcv(struct net_bridge_mcast *brmctx, + struct net_bridge_mcast_port *pmctx, struct sk_buff *skb, u16 vid) { + struct net_bridge_port *p = pmctx ? 
pmctx->port : NULL; const unsigned char *src; struct mld_msg *mld; int err; @@ -3424,11 +3461,11 @@ static int br_multicast_ipv6_rcv(struct net_bridge *br, BR_INPUT_SKB_CB(skb)->mrouters_only = 1; if (err == -ENODATA && ipv6_addr_is_all_snoopers(&ipv6_hdr(skb)->daddr)) - br_ip6_multicast_mrd_rcv(br, port, skb); + br_ip6_multicast_mrd_rcv(brmctx, pmctx, skb); return 0; } else if (err < 0) { - br_multicast_err_count(br, port, skb->protocol); + br_multicast_err_count(brmctx->br, p, skb->protocol); return err; } @@ -3439,29 +3476,31 @@ static int br_multicast_ipv6_rcv(struct net_bridge *br, case ICMPV6_MGM_REPORT: src = eth_hdr(skb)->h_source; BR_INPUT_SKB_CB(skb)->mrouters_only = 1; - err = br_ip6_multicast_add_group(br, port, &mld->mld_mca, vid, - src, true); + err = br_ip6_multicast_add_group(brmctx, pmctx, &mld->mld_mca, + vid, src, true); break; case ICMPV6_MLD2_REPORT: - err = br_ip6_multicast_mld2_report(br, port, skb, vid); + err = br_ip6_multicast_mld2_report(brmctx, pmctx, skb, vid); break; case ICMPV6_MGM_QUERY: - err = br_ip6_multicast_query(br, port, skb, vid); + err = br_ip6_multicast_query(brmctx, pmctx, skb, vid); break; case ICMPV6_MGM_REDUCTION: src = eth_hdr(skb)->h_source; - br_ip6_multicast_leave_group(br, port, &mld->mld_mca, vid, src); + br_ip6_multicast_leave_group(brmctx, pmctx, &mld->mld_mca, vid, + src); break; } - br_multicast_count(br, port, skb, BR_INPUT_SKB_CB(skb)->igmp, + br_multicast_count(brmctx->br, p, skb, BR_INPUT_SKB_CB(skb)->igmp, BR_MCAST_DIR_RX); return err; } #endif -int br_multicast_rcv(struct net_bridge *br, struct net_bridge_port *port, +int br_multicast_rcv(struct net_bridge_mcast *brmctx, + struct net_bridge_mcast_port *pmctx, struct sk_buff *skb, u16 vid) { int ret = 0; @@ -3469,16 +3508,16 @@ int br_multicast_rcv(struct net_bridge *br, struct net_bridge_port *port, BR_INPUT_SKB_CB(skb)->igmp = 0; BR_INPUT_SKB_CB(skb)->mrouters_only = 0; - if (!br_opt_get(br, BROPT_MULTICAST_ENABLED)) + if (!br_opt_get(brmctx->br, BROPT_MULTICAST_ENABLED)) return 0; switch (skb->protocol) { case htons(ETH_P_IP): - ret = br_multicast_ipv4_rcv(br, port, skb, vid); + ret = br_multicast_ipv4_rcv(brmctx, pmctx, skb, vid); break; #if IS_ENABLED(CONFIG_IPV6) case htons(ETH_P_IPV6): - ret = br_multicast_ipv6_rcv(br, port, skb, vid); + ret = br_multicast_ipv6_rcv(brmctx, pmctx, skb, vid); break; #endif } @@ -3486,17 +3525,17 @@ int br_multicast_rcv(struct net_bridge *br, struct net_bridge_port *port, return ret; } -static void br_multicast_query_expired(struct net_bridge *br, +static void br_multicast_query_expired(struct net_bridge_mcast *brmctx, struct bridge_mcast_own_query *query, struct bridge_mcast_querier *querier) { - spin_lock(&br->multicast_lock); - if (query->startup_sent < br->multicast_ctx.multicast_startup_query_count) + spin_lock(&brmctx->br->multicast_lock); + if (query->startup_sent < brmctx->multicast_startup_query_count) query->startup_sent++; RCU_INIT_POINTER(querier->port, NULL); - br_multicast_send_query(br, NULL, query); - spin_unlock(&br->multicast_lock); + br_multicast_send_query(brmctx, NULL, query); + spin_unlock(&brmctx->br->multicast_lock); } static void br_ip4_multicast_query_expired(struct timer_list *t) @@ -3504,7 +3543,7 @@ static void br_ip4_multicast_query_expired(struct timer_list *t) struct net_bridge_mcast *brmctx = from_timer(brmctx, t, ip4_own_query.timer); - br_multicast_query_expired(brmctx->br, &brmctx->ip4_own_query, + br_multicast_query_expired(brmctx, &brmctx->ip4_own_query, &brmctx->ip4_querier); } @@ -3514,7 +3553,7 @@ 
static void br_ip6_multicast_query_expired(struct timer_list *t) struct net_bridge_mcast *brmctx = from_timer(brmctx, t, ip6_own_query.timer); - br_multicast_query_expired(brmctx->br, &brmctx->ip6_own_query, + br_multicast_query_expired(brmctx, &brmctx->ip6_own_query, &brmctx->ip6_querier); } #endif @@ -3722,7 +3761,7 @@ int br_multicast_set_router(struct net_bridge *br, unsigned long val) } static void -br_multicast_rport_del_notify(struct net_bridge_port *p, bool deleted) +br_multicast_rport_del_notify(struct net_bridge_mcast_port *pmctx, bool deleted) { if (!deleted) return; @@ -3730,36 +3769,37 @@ br_multicast_rport_del_notify(struct net_bridge_port *p, bool deleted) /* For backwards compatibility for now, only notify if there is * no multicast router anymore for both IPv4 and IPv6. */ - if (!hlist_unhashed(&p->multicast_ctx.ip4_rlist)) + if (!hlist_unhashed(&pmctx->ip4_rlist)) return; #if IS_ENABLED(CONFIG_IPV6) - if (!hlist_unhashed(&p->multicast_ctx.ip6_rlist)) + if (!hlist_unhashed(&pmctx->ip6_rlist)) return; #endif - br_rtr_notify(p->br->dev, p, RTM_DELMDB); - br_port_mc_router_state_change(p, false); + br_rtr_notify(pmctx->port->br->dev, pmctx->port, RTM_DELMDB); + br_port_mc_router_state_change(pmctx->port, false); /* don't allow timer refresh */ - if (p->multicast_ctx.multicast_router == MDB_RTR_TYPE_TEMP) - p->multicast_ctx.multicast_router = MDB_RTR_TYPE_TEMP_QUERY; + if (pmctx->multicast_router == MDB_RTR_TYPE_TEMP) + pmctx->multicast_router = MDB_RTR_TYPE_TEMP_QUERY; } int br_multicast_set_port_router(struct net_bridge_port *p, unsigned long val) { struct net_bridge_mcast *brmctx = &p->br->multicast_ctx; + struct net_bridge_mcast_port *pmctx = &p->multicast_ctx; unsigned long now = jiffies; int err = -EINVAL; bool del = false; spin_lock(&p->br->multicast_lock); - if (p->multicast_ctx.multicast_router == val) { + if (pmctx->multicast_router == val) { /* Refresh the temp router port timer */ - if (p->multicast_ctx.multicast_router == MDB_RTR_TYPE_TEMP) { - mod_timer(&p->multicast_ctx.ip4_mc_router_timer, + if (pmctx->multicast_router == MDB_RTR_TYPE_TEMP) { + mod_timer(&pmctx->ip4_mc_router_timer, now + brmctx->multicast_querier_interval); #if IS_ENABLED(CONFIG_IPV6) - mod_timer(&p->multicast_ctx.ip6_mc_router_timer, + mod_timer(&pmctx->ip6_mc_router_timer, now + brmctx->multicast_querier_interval); #endif } @@ -3768,34 +3808,34 @@ int br_multicast_set_port_router(struct net_bridge_port *p, unsigned long val) } switch (val) { case MDB_RTR_TYPE_DISABLED: - p->multicast_ctx.multicast_router = MDB_RTR_TYPE_DISABLED; - del |= br_ip4_multicast_rport_del(p); - del_timer(&p->multicast_ctx.ip4_mc_router_timer); - del |= br_ip6_multicast_rport_del(p); + pmctx->multicast_router = MDB_RTR_TYPE_DISABLED; + del |= br_ip4_multicast_rport_del(pmctx); + del_timer(&pmctx->ip4_mc_router_timer); + del |= br_ip6_multicast_rport_del(pmctx); #if IS_ENABLED(CONFIG_IPV6) - del_timer(&p->multicast_ctx.ip6_mc_router_timer); + del_timer(&pmctx->ip6_mc_router_timer); #endif - br_multicast_rport_del_notify(p, del); + br_multicast_rport_del_notify(pmctx, del); break; case MDB_RTR_TYPE_TEMP_QUERY: - p->multicast_ctx.multicast_router = MDB_RTR_TYPE_TEMP_QUERY; - del |= br_ip4_multicast_rport_del(p); - del |= br_ip6_multicast_rport_del(p); - br_multicast_rport_del_notify(p, del); + pmctx->multicast_router = MDB_RTR_TYPE_TEMP_QUERY; + del |= br_ip4_multicast_rport_del(pmctx); + del |= br_ip6_multicast_rport_del(pmctx); + br_multicast_rport_del_notify(pmctx, del); break; case MDB_RTR_TYPE_PERM: - 
p->multicast_ctx.multicast_router = MDB_RTR_TYPE_PERM; - del_timer(&p->multicast_ctx.ip4_mc_router_timer); - br_ip4_multicast_add_router(p->br, p); + pmctx->multicast_router = MDB_RTR_TYPE_PERM; + del_timer(&pmctx->ip4_mc_router_timer); + br_ip4_multicast_add_router(brmctx, pmctx); #if IS_ENABLED(CONFIG_IPV6) - del_timer(&p->multicast_ctx.ip6_mc_router_timer); + del_timer(&pmctx->ip6_mc_router_timer); #endif - br_ip6_multicast_add_router(p->br, p); + br_ip6_multicast_add_router(brmctx, pmctx); break; case MDB_RTR_TYPE_TEMP: - p->multicast_ctx.multicast_router = MDB_RTR_TYPE_TEMP; - br_ip4_multicast_mark_router(p->br, p); - br_ip6_multicast_mark_router(p->br, p); + pmctx->multicast_router = MDB_RTR_TYPE_TEMP; + br_ip4_multicast_mark_router(brmctx, pmctx); + br_ip6_multicast_mark_router(brmctx, pmctx); break; default: goto unlock; @@ -3807,20 +3847,20 @@ int br_multicast_set_port_router(struct net_bridge_port *p, unsigned long val) return err; } -static void br_multicast_start_querier(struct net_bridge *br, +static void br_multicast_start_querier(struct net_bridge_mcast *brmctx, struct bridge_mcast_own_query *query) { struct net_bridge_port *port; - __br_multicast_open(br, query); + __br_multicast_open(brmctx->br, query); rcu_read_lock(); - list_for_each_entry_rcu(port, &br->port_list, list) { + list_for_each_entry_rcu(port, &brmctx->br->port_list, list) { if (port->state == BR_STATE_DISABLED || port->state == BR_STATE_BLOCKING) continue; - if (query == &br->multicast_ctx.ip4_own_query) + if (query == &brmctx->ip4_own_query) br_multicast_enable(&port->multicast_ctx.ip4_own_query); #if IS_ENABLED(CONFIG_IPV6) else @@ -3858,7 +3898,7 @@ int br_multicast_toggle(struct net_bridge *br, unsigned long val, br_multicast_open(br); list_for_each_entry(port, &br->port_list, list) - __br_multicast_enable_port(port); + __br_multicast_enable_port_ctx(&port->multicast_ctx); change_snoopers = true; @@ -3901,7 +3941,7 @@ bool br_multicast_router(const struct net_device *dev) bool is_router; spin_lock_bh(&br->multicast_lock); - is_router = br_multicast_is_router(br, NULL); + is_router = br_multicast_is_router(&br->multicast_ctx, NULL); spin_unlock_bh(&br->multicast_lock); return is_router; } @@ -3927,13 +3967,13 @@ int br_multicast_set_querier(struct net_bridge *br, unsigned long val) if (!timer_pending(&brmctx->ip4_other_query.timer)) brmctx->ip4_other_query.delay_time = jiffies + max_delay; - br_multicast_start_querier(br, &brmctx->ip4_own_query); + br_multicast_start_querier(brmctx, &brmctx->ip4_own_query); #if IS_ENABLED(CONFIG_IPV6) if (!timer_pending(&brmctx->ip6_other_query.timer)) brmctx->ip6_other_query.delay_time = jiffies + max_delay; - br_multicast_start_querier(br, &brmctx->ip6_own_query); + br_multicast_start_querier(brmctx, &brmctx->ip6_own_query); #endif unlock: @@ -4066,7 +4106,7 @@ bool br_multicast_has_querier_anywhere(struct net_device *dev, int proto) memset(ð, 0, sizeof(eth)); eth.h_proto = htons(proto); - ret = br_multicast_querier_exists(br, ð, NULL); + ret = br_multicast_querier_exists(&br->multicast_ctx, ð, NULL); unlock: rcu_read_unlock(); @@ -4254,7 +4294,8 @@ static void br_mcast_stats_add(struct bridge_mcast_stats __percpu *stats, u64_stats_update_end(&pstats->syncp); } -void br_multicast_count(struct net_bridge *br, const struct net_bridge_port *p, +void br_multicast_count(struct net_bridge *br, + const struct net_bridge_port *p, const struct sk_buff *skb, u8 type, u8 dir) { struct bridge_mcast_stats __percpu *stats; diff --git a/net/bridge/br_multicast_eht.c 
b/net/bridge/br_multicast_eht.c index 13290a749d09..f91c071d1608 100644 --- a/net/bridge/br_multicast_eht.c +++ b/net/bridge/br_multicast_eht.c @@ -33,7 +33,8 @@ static bool br_multicast_del_eht_set_entry(struct net_bridge_port_group *pg, union net_bridge_eht_addr *src_addr, union net_bridge_eht_addr *h_addr); -static void br_multicast_create_eht_set_entry(struct net_bridge_port_group *pg, +static void br_multicast_create_eht_set_entry(const struct net_bridge_mcast *brmctx, + struct net_bridge_port_group *pg, union net_bridge_eht_addr *src_addr, union net_bridge_eht_addr *h_addr, int filter_mode, @@ -388,7 +389,8 @@ static void br_multicast_ip_src_to_eht_addr(const struct br_ip *src, } } -static void br_eht_convert_host_filter_mode(struct net_bridge_port_group *pg, +static void br_eht_convert_host_filter_mode(const struct net_bridge_mcast *brmctx, + struct net_bridge_port_group *pg, union net_bridge_eht_addr *h_addr, int filter_mode) { @@ -405,14 +407,15 @@ static void br_eht_convert_host_filter_mode(struct net_bridge_port_group *pg, br_multicast_del_eht_set_entry(pg, &zero_addr, h_addr); break; case MCAST_EXCLUDE: - br_multicast_create_eht_set_entry(pg, &zero_addr, h_addr, - MCAST_EXCLUDE, + br_multicast_create_eht_set_entry(brmctx, pg, &zero_addr, + h_addr, MCAST_EXCLUDE, true); break; } } -static void br_multicast_create_eht_set_entry(struct net_bridge_port_group *pg, +static void br_multicast_create_eht_set_entry(const struct net_bridge_mcast *brmctx, + struct net_bridge_port_group *pg, union net_bridge_eht_addr *src_addr, union net_bridge_eht_addr *h_addr, int filter_mode, @@ -441,8 +444,8 @@ static void br_multicast_create_eht_set_entry(struct net_bridge_port_group *pg, if (!set_h) goto fail_set_entry; - mod_timer(&set_h->timer, jiffies + br_multicast_gmi(br)); - mod_timer(&eht_set->timer, jiffies + br_multicast_gmi(br)); + mod_timer(&set_h->timer, jiffies + br_multicast_gmi(brmctx)); + mod_timer(&eht_set->timer, jiffies + br_multicast_gmi(brmctx)); return; @@ -499,7 +502,8 @@ static void br_multicast_del_eht_host(struct net_bridge_port_group *pg, } /* create new set entries from reports */ -static void __eht_create_set_entries(struct net_bridge_port_group *pg, +static void __eht_create_set_entries(const struct net_bridge_mcast *brmctx, + struct net_bridge_port_group *pg, union net_bridge_eht_addr *h_addr, void *srcs, u32 nsrcs, @@ -512,8 +516,8 @@ static void __eht_create_set_entries(struct net_bridge_port_group *pg, memset(&eht_src_addr, 0, sizeof(eht_src_addr)); for (src_idx = 0; src_idx < nsrcs; src_idx++) { memcpy(&eht_src_addr, srcs + (src_idx * addr_size), addr_size); - br_multicast_create_eht_set_entry(pg, &eht_src_addr, h_addr, - filter_mode, + br_multicast_create_eht_set_entry(brmctx, pg, &eht_src_addr, + h_addr, filter_mode, false); } } @@ -549,7 +553,8 @@ static bool __eht_del_set_entries(struct net_bridge_port_group *pg, return changed; } -static bool br_multicast_eht_allow(struct net_bridge_port_group *pg, +static bool br_multicast_eht_allow(const struct net_bridge_mcast *brmctx, + struct net_bridge_port_group *pg, union net_bridge_eht_addr *h_addr, void *srcs, u32 nsrcs, @@ -559,8 +564,8 @@ static bool br_multicast_eht_allow(struct net_bridge_port_group *pg, switch (br_multicast_eht_host_filter_mode(pg, h_addr)) { case MCAST_INCLUDE: - __eht_create_set_entries(pg, h_addr, srcs, nsrcs, addr_size, - MCAST_INCLUDE); + __eht_create_set_entries(brmctx, pg, h_addr, srcs, nsrcs, + addr_size, MCAST_INCLUDE); break; case MCAST_EXCLUDE: changed = __eht_del_set_entries(pg, 
h_addr, srcs, nsrcs, @@ -571,7 +576,8 @@ static bool br_multicast_eht_allow(struct net_bridge_port_group *pg, return changed; } -static bool br_multicast_eht_block(struct net_bridge_port_group *pg, +static bool br_multicast_eht_block(const struct net_bridge_mcast *brmctx, + struct net_bridge_port_group *pg, union net_bridge_eht_addr *h_addr, void *srcs, u32 nsrcs, @@ -585,7 +591,7 @@ static bool br_multicast_eht_block(struct net_bridge_port_group *pg, addr_size); break; case MCAST_EXCLUDE: - __eht_create_set_entries(pg, h_addr, srcs, nsrcs, addr_size, + __eht_create_set_entries(brmctx, pg, h_addr, srcs, nsrcs, addr_size, MCAST_EXCLUDE); break; } @@ -594,7 +600,8 @@ static bool br_multicast_eht_block(struct net_bridge_port_group *pg, } /* flush_entries is true when changing mode */ -static bool __eht_inc_exc(struct net_bridge_port_group *pg, +static bool __eht_inc_exc(const struct net_bridge_mcast *brmctx, + struct net_bridge_port_group *pg, union net_bridge_eht_addr *h_addr, void *srcs, u32 nsrcs, @@ -612,11 +619,10 @@ static bool __eht_inc_exc(struct net_bridge_port_group *pg, /* if we're changing mode del host and its entries */ if (flush_entries) br_multicast_del_eht_host(pg, h_addr); - __eht_create_set_entries(pg, h_addr, srcs, nsrcs, addr_size, + __eht_create_set_entries(brmctx, pg, h_addr, srcs, nsrcs, addr_size, filter_mode); /* we can be missing sets only if we've deleted some entries */ if (flush_entries) { - struct net_bridge *br = pg->key.port->br; struct net_bridge_group_eht_set *eht_set; struct net_bridge_group_src *src_ent; struct hlist_node *tmp; @@ -647,14 +653,15 @@ static bool __eht_inc_exc(struct net_bridge_port_group *pg, &eht_src_addr); if (!eht_set) continue; - mod_timer(&eht_set->timer, jiffies + br_multicast_lmqt(br)); + mod_timer(&eht_set->timer, jiffies + br_multicast_lmqt(brmctx)); } } return changed; } -static bool br_multicast_eht_inc(struct net_bridge_port_group *pg, +static bool br_multicast_eht_inc(const struct net_bridge_mcast *brmctx, + struct net_bridge_port_group *pg, union net_bridge_eht_addr *h_addr, void *srcs, u32 nsrcs, @@ -663,14 +670,15 @@ static bool br_multicast_eht_inc(struct net_bridge_port_group *pg, { bool changed; - changed = __eht_inc_exc(pg, h_addr, srcs, nsrcs, addr_size, + changed = __eht_inc_exc(brmctx, pg, h_addr, srcs, nsrcs, addr_size, MCAST_INCLUDE, to_report); - br_eht_convert_host_filter_mode(pg, h_addr, MCAST_INCLUDE); + br_eht_convert_host_filter_mode(brmctx, pg, h_addr, MCAST_INCLUDE); return changed; } -static bool br_multicast_eht_exc(struct net_bridge_port_group *pg, +static bool br_multicast_eht_exc(const struct net_bridge_mcast *brmctx, + struct net_bridge_port_group *pg, union net_bridge_eht_addr *h_addr, void *srcs, u32 nsrcs, @@ -679,14 +687,15 @@ static bool br_multicast_eht_exc(struct net_bridge_port_group *pg, { bool changed; - changed = __eht_inc_exc(pg, h_addr, srcs, nsrcs, addr_size, + changed = __eht_inc_exc(brmctx, pg, h_addr, srcs, nsrcs, addr_size, MCAST_EXCLUDE, to_report); - br_eht_convert_host_filter_mode(pg, h_addr, MCAST_EXCLUDE); + br_eht_convert_host_filter_mode(brmctx, pg, h_addr, MCAST_EXCLUDE); return changed; } -static bool __eht_ip4_handle(struct net_bridge_port_group *pg, +static bool __eht_ip4_handle(const struct net_bridge_mcast *brmctx, + struct net_bridge_port_group *pg, union net_bridge_eht_addr *h_addr, void *srcs, u32 nsrcs, @@ -696,24 +705,25 @@ static bool __eht_ip4_handle(struct net_bridge_port_group *pg, switch (grec_type) { case IGMPV3_ALLOW_NEW_SOURCES: - br_multicast_eht_allow(pg, 
h_addr, srcs, nsrcs, sizeof(__be32)); + br_multicast_eht_allow(brmctx, pg, h_addr, srcs, nsrcs, + sizeof(__be32)); break; case IGMPV3_BLOCK_OLD_SOURCES: - changed = br_multicast_eht_block(pg, h_addr, srcs, nsrcs, + changed = br_multicast_eht_block(brmctx, pg, h_addr, srcs, nsrcs, sizeof(__be32)); break; case IGMPV3_CHANGE_TO_INCLUDE: to_report = true; fallthrough; case IGMPV3_MODE_IS_INCLUDE: - changed = br_multicast_eht_inc(pg, h_addr, srcs, nsrcs, + changed = br_multicast_eht_inc(brmctx, pg, h_addr, srcs, nsrcs, sizeof(__be32), to_report); break; case IGMPV3_CHANGE_TO_EXCLUDE: to_report = true; fallthrough; case IGMPV3_MODE_IS_EXCLUDE: - changed = br_multicast_eht_exc(pg, h_addr, srcs, nsrcs, + changed = br_multicast_eht_exc(brmctx, pg, h_addr, srcs, nsrcs, sizeof(__be32), to_report); break; } @@ -722,7 +732,8 @@ static bool __eht_ip4_handle(struct net_bridge_port_group *pg, } #if IS_ENABLED(CONFIG_IPV6) -static bool __eht_ip6_handle(struct net_bridge_port_group *pg, +static bool __eht_ip6_handle(const struct net_bridge_mcast *brmctx, + struct net_bridge_port_group *pg, union net_bridge_eht_addr *h_addr, void *srcs, u32 nsrcs, @@ -732,18 +743,18 @@ static bool __eht_ip6_handle(struct net_bridge_port_group *pg, switch (grec_type) { case MLD2_ALLOW_NEW_SOURCES: - br_multicast_eht_allow(pg, h_addr, srcs, nsrcs, + br_multicast_eht_allow(brmctx, pg, h_addr, srcs, nsrcs, sizeof(struct in6_addr)); break; case MLD2_BLOCK_OLD_SOURCES: - changed = br_multicast_eht_block(pg, h_addr, srcs, nsrcs, + changed = br_multicast_eht_block(brmctx, pg, h_addr, srcs, nsrcs, sizeof(struct in6_addr)); break; case MLD2_CHANGE_TO_INCLUDE: to_report = true; fallthrough; case MLD2_MODE_IS_INCLUDE: - changed = br_multicast_eht_inc(pg, h_addr, srcs, nsrcs, + changed = br_multicast_eht_inc(brmctx, pg, h_addr, srcs, nsrcs, sizeof(struct in6_addr), to_report); break; @@ -751,7 +762,7 @@ static bool __eht_ip6_handle(struct net_bridge_port_group *pg, to_report = true; fallthrough; case MLD2_MODE_IS_EXCLUDE: - changed = br_multicast_eht_exc(pg, h_addr, srcs, nsrcs, + changed = br_multicast_eht_exc(brmctx, pg, h_addr, srcs, nsrcs, sizeof(struct in6_addr), to_report); break; @@ -762,7 +773,8 @@ static bool __eht_ip6_handle(struct net_bridge_port_group *pg, #endif /* true means an entry was deleted */ -bool br_multicast_eht_handle(struct net_bridge_port_group *pg, +bool br_multicast_eht_handle(const struct net_bridge_mcast *brmctx, + struct net_bridge_port_group *pg, void *h_addr, void *srcs, u32 nsrcs, @@ -779,12 +791,12 @@ bool br_multicast_eht_handle(struct net_bridge_port_group *pg, memset(&eht_host_addr, 0, sizeof(eht_host_addr)); memcpy(&eht_host_addr, h_addr, addr_size); if (addr_size == sizeof(__be32)) - changed = __eht_ip4_handle(pg, &eht_host_addr, srcs, nsrcs, - grec_type); + changed = __eht_ip4_handle(brmctx, pg, &eht_host_addr, srcs, + nsrcs, grec_type); #if IS_ENABLED(CONFIG_IPV6) else - changed = __eht_ip6_handle(pg, &eht_host_addr, srcs, nsrcs, - grec_type); + changed = __eht_ip6_handle(brmctx, pg, &eht_host_addr, srcs, + nsrcs, grec_type); #endif out: diff --git a/net/bridge/br_private.h b/net/bridge/br_private.h index badd490fce7a..89e942789b12 100644 --- a/net/bridge/br_private.h +++ b/net/bridge/br_private.h @@ -830,9 +830,10 @@ int br_ioctl_deviceless_stub(struct net *net, unsigned int cmd, /* br_multicast.c */ #ifdef CONFIG_BRIDGE_IGMP_SNOOPING -int br_multicast_rcv(struct net_bridge *br, struct net_bridge_port *port, +int br_multicast_rcv(struct net_bridge_mcast *brmctx, + struct net_bridge_mcast_port 
*pmctx, struct sk_buff *skb, u16 vid); -struct net_bridge_mdb_entry *br_mdb_get(struct net_bridge *br, +struct net_bridge_mdb_entry *br_mdb_get(struct net_bridge_mcast *brmctx, struct sk_buff *skb, u16 vid); int br_multicast_add_port(struct net_bridge_port *port); void br_multicast_del_port(struct net_bridge_port *port); @@ -844,8 +845,9 @@ void br_multicast_leave_snoopers(struct net_bridge *br); void br_multicast_open(struct net_bridge *br); void br_multicast_stop(struct net_bridge *br); void br_multicast_dev_del(struct net_bridge *br); -void br_multicast_flood(struct net_bridge_mdb_entry *mdst, - struct sk_buff *skb, bool local_rcv, bool local_orig); +void br_multicast_flood(struct net_bridge_mdb_entry *mdst, struct sk_buff *skb, + struct net_bridge_mcast *brmctx, + bool local_rcv, bool local_orig); int br_multicast_set_router(struct net_bridge *br, unsigned long val); int br_multicast_set_port_router(struct net_bridge_port *p, unsigned long val); int br_multicast_toggle(struct net_bridge *br, unsigned long val, @@ -874,7 +876,8 @@ void br_rtr_notify(struct net_device *dev, struct net_bridge_port *port, void br_multicast_del_pg(struct net_bridge_mdb_entry *mp, struct net_bridge_port_group *pg, struct net_bridge_port_group __rcu **pp); -void br_multicast_count(struct net_bridge *br, const struct net_bridge_port *p, +void br_multicast_count(struct net_bridge *br, + const struct net_bridge_port *p, const struct sk_buff *skb, u8 type, u8 dir); int br_multicast_init_stats(struct net_bridge *br); void br_multicast_uninit_stats(struct net_bridge *br); @@ -903,10 +906,9 @@ static inline bool br_group_is_l2(const struct br_ip *group) rcu_dereference_protected(X, lockdep_is_held(&br->multicast_lock)) static inline struct hlist_node * -br_multicast_get_first_rport_node(struct net_bridge *br, struct sk_buff *skb) +br_multicast_get_first_rport_node(struct net_bridge_mcast *brmctx, + struct sk_buff *skb) { - struct net_bridge_mcast *brmctx = &br->multicast_ctx; - #if IS_ENABLED(CONFIG_IPV6) if (skb->protocol == htons(ETH_P_IPV6)) return rcu_dereference(hlist_first_rcu(&brmctx->ip6_mc_router_list)); @@ -949,10 +951,8 @@ static inline bool br_ip6_multicast_is_router(struct net_bridge_mcast *brmctx) } static inline bool -br_multicast_is_router(struct net_bridge *br, struct sk_buff *skb) +br_multicast_is_router(struct net_bridge_mcast *brmctx, struct sk_buff *skb) { - struct net_bridge_mcast *brmctx = &br->multicast_ctx; - switch (brmctx->multicast_router) { case MDB_RTR_TYPE_PERM: return true; @@ -973,14 +973,14 @@ br_multicast_is_router(struct net_bridge *br, struct sk_buff *skb) } static inline bool -__br_multicast_querier_exists(struct net_bridge *br, - struct bridge_mcast_other_query *querier, - const bool is_ipv6) +__br_multicast_querier_exists(struct net_bridge_mcast *brmctx, + struct bridge_mcast_other_query *querier, + const bool is_ipv6) { bool own_querier_enabled; - if (br_opt_get(br, BROPT_MULTICAST_QUERIER)) { - if (is_ipv6 && !br_opt_get(br, BROPT_HAS_IPV6_ADDR)) + if (br_opt_get(brmctx->br, BROPT_MULTICAST_QUERIER)) { + if (is_ipv6 && !br_opt_get(brmctx->br, BROPT_HAS_IPV6_ADDR)) own_querier_enabled = false; else own_querier_enabled = true; @@ -992,18 +992,18 @@ __br_multicast_querier_exists(struct net_bridge *br, (own_querier_enabled || timer_pending(&querier->timer)); } -static inline bool br_multicast_querier_exists(struct net_bridge *br, +static inline bool br_multicast_querier_exists(struct net_bridge_mcast *brmctx, struct ethhdr *eth, const struct net_bridge_mdb_entry *mdb) { switch 
(eth->h_proto) { case (htons(ETH_P_IP)): - return __br_multicast_querier_exists(br, - &br->multicast_ctx.ip4_other_query, false); + return __br_multicast_querier_exists(brmctx, + &brmctx->ip4_other_query, false); #if IS_ENABLED(CONFIG_IPV6) case (htons(ETH_P_IPV6)): - return __br_multicast_querier_exists(br, - &br->multicast_ctx.ip6_other_query, true); + return __br_multicast_querier_exists(brmctx, + &brmctx->ip6_other_query, true); #endif default: return !!mdb && br_group_is_l2(&mdb->addr); @@ -1024,15 +1024,16 @@ static inline bool br_multicast_is_star_g(const struct br_ip *ip) } } -static inline bool br_multicast_should_handle_mode(const struct net_bridge *br, - __be16 proto) +static inline bool +br_multicast_should_handle_mode(const struct net_bridge_mcast *brmctx, + __be16 proto) { switch (proto) { case htons(ETH_P_IP): - return !!(br->multicast_ctx.multicast_igmp_version == 3); + return !!(brmctx->multicast_igmp_version == 3); #if IS_ENABLED(CONFIG_IPV6) case htons(ETH_P_IPV6): - return !!(br->multicast_ctx.multicast_mld_version == 2); + return !!(brmctx->multicast_mld_version == 2); #endif default: return false; @@ -1044,28 +1045,28 @@ static inline int br_multicast_igmp_type(const struct sk_buff *skb) return BR_INPUT_SKB_CB(skb)->igmp; } -static inline unsigned long br_multicast_lmqt(const struct net_bridge *br) +static inline unsigned long br_multicast_lmqt(const struct net_bridge_mcast *brmctx) { - return br->multicast_ctx.multicast_last_member_interval * - br->multicast_ctx.multicast_last_member_count; + return brmctx->multicast_last_member_interval * + brmctx->multicast_last_member_count; } -static inline unsigned long br_multicast_gmi(const struct net_bridge *br) +static inline unsigned long br_multicast_gmi(const struct net_bridge_mcast *brmctx) { /* use the RFC default of 2 for QRV */ - return 2 * br->multicast_ctx.multicast_query_interval + - br->multicast_ctx.multicast_query_response_interval; + return 2 * brmctx->multicast_query_interval + + brmctx->multicast_query_response_interval; } #else -static inline int br_multicast_rcv(struct net_bridge *br, - struct net_bridge_port *port, +static inline int br_multicast_rcv(struct net_bridge_mcast *brmctx, + struct net_bridge_mcast_port *pmctx, struct sk_buff *skb, u16 vid) { return 0; } -static inline struct net_bridge_mdb_entry *br_mdb_get(struct net_bridge *br, +static inline struct net_bridge_mdb_entry *br_mdb_get(struct net_bridge_mcast *brmctx, struct sk_buff *skb, u16 vid) { return NULL; @@ -1114,17 +1115,18 @@ static inline void br_multicast_dev_del(struct net_bridge *br) static inline void br_multicast_flood(struct net_bridge_mdb_entry *mdst, struct sk_buff *skb, + struct net_bridge_mcast *brmctx, bool local_rcv, bool local_orig) { } -static inline bool br_multicast_is_router(struct net_bridge *br, +static inline bool br_multicast_is_router(struct net_bridge_mcast *brmctx, struct sk_buff *skb) { return false; } -static inline bool br_multicast_querier_exists(struct net_bridge *br, +static inline bool br_multicast_querier_exists(struct net_bridge_mcast *brmctx, struct ethhdr *eth, const struct net_bridge_mdb_entry *mdb) { diff --git a/net/bridge/br_private_mcast_eht.h b/net/bridge/br_private_mcast_eht.h index f89049f4892c..adf82a05515a 100644 --- a/net/bridge/br_private_mcast_eht.h +++ b/net/bridge/br_private_mcast_eht.h @@ -51,7 +51,8 @@ struct net_bridge_group_eht_set { #ifdef CONFIG_BRIDGE_IGMP_SNOOPING void br_multicast_eht_clean_sets(struct net_bridge_port_group *pg); -bool br_multicast_eht_handle(struct 
net_bridge_port_group *pg, +bool br_multicast_eht_handle(const struct net_bridge_mcast *brmctx, + struct net_bridge_port_group *pg, void *h_addr, void *srcs, u32 nsrcs, From patchwork Mon Jul 19 17:06:26 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nikolay Aleksandrov X-Patchwork-Id: 12386391 X-Patchwork-Delegate: kuba@kernel.org Received: from debil.vdiclient.nvidia.com (84-238-136-197.ip.btc-net.bg.
[84.238.136.197]) by smtp.gmail.com with ESMTPSA id nc29sm6073896ejc.10.2021.07.19.10.09.59 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 19 Jul 2021 10:10:00 -0700 (PDT) From: Nikolay Aleksandrov To: netdev@vger.kernel.org Cc: roopa@nvidia.com, bridge@lists.linux-foundation.org, Nikolay Aleksandrov Subject: [PATCH net-next 04/15] net: bridge: vlan: add global and per-port multicast context Date: Mon, 19 Jul 2021 20:06:26 +0300 Message-Id: <20210719170637.435541-5-razor@blackwall.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20210719170637.435541-1-razor@blackwall.org> References: <20210719170637.435541-1-razor@blackwall.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org X-Patchwork-Delegate: kuba@kernel.org From: Nikolay Aleksandrov Add global and per-port vlan multicast context, only initialized but still not used. No functional changes intended. Signed-off-by: Nikolay Aleksandrov --- net/bridge/br_multicast.c | 104 +++++++++++++++++++++++--------------- net/bridge/br_private.h | 38 ++++++++++++++ net/bridge/br_vlan.c | 4 ++ 3 files changed, 106 insertions(+), 40 deletions(-) diff --git a/net/bridge/br_multicast.c b/net/bridge/br_multicast.c index 64145e48a0a5..6f803f789217 100644 --- a/net/bridge/br_multicast.c +++ b/net/bridge/br_multicast.c @@ -80,6 +80,7 @@ __br_multicast_add_group(struct net_bridge_mcast *brmctx, bool blocked); static void br_multicast_find_del_pg(struct net_bridge *br, struct net_bridge_port_group *pg); +static void __br_multicast_stop(struct net_bridge_mcast *brmctx); static struct net_bridge_port_group * br_sg_port_find(struct net_bridge *br, @@ -1696,10 +1697,12 @@ static int br_mc_disabled_update(struct net_device *dev, bool value, return switchdev_port_attr_set(dev, &attr, extack); } -static void br_multicast_port_ctx_init(struct net_bridge_port *port, - struct net_bridge_mcast_port *pmctx) +void br_multicast_port_ctx_init(struct net_bridge_port *port, + struct net_bridge_vlan *vlan, + struct net_bridge_mcast_port *pmctx) { pmctx->port = port; + pmctx->vlan = vlan; pmctx->multicast_router = MDB_RTR_TYPE_TEMP_QUERY; timer_setup(&pmctx->ip4_mc_router_timer, br_ip4_multicast_router_expired, 0); @@ -1713,7 +1716,7 @@ static void br_multicast_port_ctx_init(struct net_bridge_port *port, #endif } -static void br_multicast_port_ctx_deinit(struct net_bridge_mcast_port *pmctx) +void br_multicast_port_ctx_deinit(struct net_bridge_mcast_port *pmctx) { #if IS_ENABLED(CONFIG_IPV6) del_timer_sync(&pmctx->ip6_mc_router_timer); @@ -1726,7 +1729,7 @@ int br_multicast_add_port(struct net_bridge_port *port) int err; port->multicast_eht_hosts_limit = BR_MCAST_DEFAULT_EHT_HOSTS_LIMIT; - br_multicast_port_ctx_init(port, &port->multicast_ctx); + br_multicast_port_ctx_init(port, NULL, &port->multicast_ctx); err = br_mc_disabled_update(port->dev, br_opt_get(port->br, @@ -3571,48 +3574,63 @@ static void br_multicast_gc_work(struct work_struct *work) br_multicast_gc(&deleted_head); } -void br_multicast_init(struct net_bridge *br) +void br_multicast_ctx_init(struct net_bridge *br, + struct net_bridge_vlan *vlan, + struct net_bridge_mcast *brmctx) { - br->hash_max = BR_MULTICAST_DEFAULT_HASH_MAX; + brmctx->br = br; + brmctx->vlan = vlan; + brmctx->multicast_router = MDB_RTR_TYPE_TEMP_QUERY; + brmctx->multicast_last_member_count = 2; + brmctx->multicast_startup_query_count = 2; - br->multicast_ctx.br = br; - br->multicast_ctx.multicast_router = MDB_RTR_TYPE_TEMP_QUERY; - br->multicast_ctx.multicast_last_member_count = 2; - 
br->multicast_ctx.multicast_startup_query_count = 2; - - br->multicast_ctx.multicast_last_member_interval = HZ; - br->multicast_ctx.multicast_query_response_interval = 10 * HZ; - br->multicast_ctx.multicast_startup_query_interval = 125 * HZ / 4; - br->multicast_ctx.multicast_query_interval = 125 * HZ; - br->multicast_ctx.multicast_querier_interval = 255 * HZ; - br->multicast_ctx.multicast_membership_interval = 260 * HZ; - - br->multicast_ctx.ip4_other_query.delay_time = 0; - br->multicast_ctx.ip4_querier.port = NULL; - br->multicast_ctx.multicast_igmp_version = 2; + brmctx->multicast_last_member_interval = HZ; + brmctx->multicast_query_response_interval = 10 * HZ; + brmctx->multicast_startup_query_interval = 125 * HZ / 4; + brmctx->multicast_query_interval = 125 * HZ; + brmctx->multicast_querier_interval = 255 * HZ; + brmctx->multicast_membership_interval = 260 * HZ; + + brmctx->ip4_other_query.delay_time = 0; + brmctx->ip4_querier.port = NULL; + brmctx->multicast_igmp_version = 2; #if IS_ENABLED(CONFIG_IPV6) - br->multicast_ctx.multicast_mld_version = 1; - br->multicast_ctx.ip6_other_query.delay_time = 0; - br->multicast_ctx.ip6_querier.port = NULL; + brmctx->multicast_mld_version = 1; + brmctx->ip6_other_query.delay_time = 0; + brmctx->ip6_querier.port = NULL; #endif - br_opt_toggle(br, BROPT_MULTICAST_ENABLED, true); - br_opt_toggle(br, BROPT_HAS_IPV6_ADDR, true); - spin_lock_init(&br->multicast_lock); - timer_setup(&br->multicast_ctx.ip4_mc_router_timer, + timer_setup(&brmctx->ip4_mc_router_timer, br_ip4_multicast_local_router_expired, 0); - timer_setup(&br->multicast_ctx.ip4_other_query.timer, + timer_setup(&brmctx->ip4_other_query.timer, br_ip4_multicast_querier_expired, 0); - timer_setup(&br->multicast_ctx.ip4_own_query.timer, + timer_setup(&brmctx->ip4_own_query.timer, br_ip4_multicast_query_expired, 0); #if IS_ENABLED(CONFIG_IPV6) - timer_setup(&br->multicast_ctx.ip6_mc_router_timer, + timer_setup(&brmctx->ip6_mc_router_timer, br_ip6_multicast_local_router_expired, 0); - timer_setup(&br->multicast_ctx.ip6_other_query.timer, + timer_setup(&brmctx->ip6_other_query.timer, br_ip6_multicast_querier_expired, 0); - timer_setup(&br->multicast_ctx.ip6_own_query.timer, + timer_setup(&brmctx->ip6_own_query.timer, br_ip6_multicast_query_expired, 0); #endif +} + +void br_multicast_ctx_deinit(struct net_bridge_mcast *brmctx) +{ + __br_multicast_stop(brmctx); +} + +void br_multicast_init(struct net_bridge *br) +{ + br->hash_max = BR_MULTICAST_DEFAULT_HASH_MAX; + + br_multicast_ctx_init(br, NULL, &br->multicast_ctx); + + br_opt_toggle(br, BROPT_MULTICAST_ENABLED, true); + br_opt_toggle(br, BROPT_HAS_IPV6_ADDR, true); + + spin_lock_init(&br->multicast_lock); INIT_HLIST_HEAD(&br->mdb_list); INIT_HLIST_HEAD(&br->mcast_gc_list); INIT_WORK(&br->mcast_gc_work, br_multicast_gc_work); @@ -3699,18 +3717,23 @@ void br_multicast_open(struct net_bridge *br) #endif } -void br_multicast_stop(struct net_bridge *br) +static void __br_multicast_stop(struct net_bridge_mcast *brmctx) { - del_timer_sync(&br->multicast_ctx.ip4_mc_router_timer); - del_timer_sync(&br->multicast_ctx.ip4_other_query.timer); - del_timer_sync(&br->multicast_ctx.ip4_own_query.timer); + del_timer_sync(&brmctx->ip4_mc_router_timer); + del_timer_sync(&brmctx->ip4_other_query.timer); + del_timer_sync(&brmctx->ip4_own_query.timer); #if IS_ENABLED(CONFIG_IPV6) - del_timer_sync(&br->multicast_ctx.ip6_mc_router_timer); - del_timer_sync(&br->multicast_ctx.ip6_other_query.timer); - del_timer_sync(&br->multicast_ctx.ip6_own_query.timer); + 
del_timer_sync(&brmctx->ip6_mc_router_timer); + del_timer_sync(&brmctx->ip6_other_query.timer); + del_timer_sync(&brmctx->ip6_own_query.timer); #endif } +void br_multicast_stop(struct net_bridge *br) +{ + __br_multicast_stop(&br->multicast_ctx); +} + void br_multicast_dev_del(struct net_bridge *br) { struct net_bridge_mdb_entry *mp; @@ -3723,6 +3746,7 @@ void br_multicast_dev_del(struct net_bridge *br) hlist_move_list(&br->mcast_gc_list, &deleted_head); spin_unlock_bh(&br->multicast_lock); + br_multicast_ctx_deinit(&br->multicast_ctx); br_multicast_gc(&deleted_head); cancel_work_sync(&br->mcast_gc_work); diff --git a/net/bridge/br_private.h b/net/bridge/br_private.h index 89e942789b12..7a0d077c2c39 100644 --- a/net/bridge/br_private.h +++ b/net/bridge/br_private.h @@ -93,6 +93,7 @@ struct bridge_mcast_stats { struct net_bridge_mcast_port { #ifdef CONFIG_BRIDGE_IGMP_SNOOPING struct net_bridge_port *port; + struct net_bridge_vlan *vlan; struct bridge_mcast_own_query ip4_own_query; struct timer_list ip4_mc_router_timer; @@ -110,6 +111,7 @@ struct net_bridge_mcast_port { struct net_bridge_mcast { #ifdef CONFIG_BRIDGE_IGMP_SNOOPING struct net_bridge *br; + struct net_bridge_vlan *vlan; u32 multicast_last_member_count; u32 multicast_startup_query_count; @@ -165,6 +167,9 @@ enum { * @refcnt: if MASTER flag set, this is bumped for each port referencing it * @brvlan: if MASTER flag unset, this points to the global per-VLAN context * for this VLAN entry + * @br_mcast_ctx: if MASTER flag set, this is the global vlan multicast context + * @port_mcast_ctx: if MASTER flag unset, this is the per-port/vlan multicast + * context * @vlist: sorted list of VLAN entries * @rcu: used for entry destruction * @@ -192,6 +197,11 @@ struct net_bridge_vlan { struct br_tunnel_info tinfo; + union { + struct net_bridge_mcast br_mcast_ctx; + struct net_bridge_mcast_port port_mcast_ctx; + }; + struct list_head vlist; struct rcu_head rcu; @@ -896,6 +906,14 @@ struct net_bridge_group_src * br_multicast_find_group_src(struct net_bridge_port_group *pg, struct br_ip *ip); void br_multicast_del_group_src(struct net_bridge_group_src *src, bool fastleave); +void br_multicast_ctx_init(struct net_bridge *br, + struct net_bridge_vlan *vlan, + struct net_bridge_mcast *brmctx); +void br_multicast_ctx_deinit(struct net_bridge_mcast *brmctx); +void br_multicast_port_ctx_init(struct net_bridge_port *port, + struct net_bridge_vlan *vlan, + struct net_bridge_mcast_port *pmctx); +void br_multicast_port_ctx_deinit(struct net_bridge_mcast_port *pmctx); static inline bool br_group_is_l2(const struct br_ip *group) { @@ -1170,6 +1188,26 @@ static inline int br_multicast_igmp_type(const struct sk_buff *skb) { return 0; } + +static inline void br_multicast_ctx_init(struct net_bridge *br, + struct net_bridge_vlan *vlan, + struct net_bridge_mcast *brmctx) +{ +} + +static inline void br_multicast_ctx_deinit(struct net_bridge_mcast *brmctx) +{ +} + +static inline void br_multicast_port_ctx_init(struct net_bridge_port *port, + struct net_bridge_vlan *vlan, + struct net_bridge_mcast_port *pmctx) +{ +} + +static inline void br_multicast_port_ctx_deinit(struct net_bridge_mcast_port *pmctx) +{ +} #endif /* br_vlan.c */ diff --git a/net/bridge/br_vlan.c b/net/bridge/br_vlan.c index a08e9f193009..e7b7bb0a005b 100644 --- a/net/bridge/br_vlan.c +++ b/net/bridge/br_vlan.c @@ -190,6 +190,7 @@ static void br_vlan_put_master(struct net_bridge_vlan *masterv) rhashtable_remove_fast(&vg->vlan_hash, &masterv->vnode, br_vlan_rht_params); __vlan_del_list(masterv); + 
br_multicast_ctx_deinit(&masterv->br_mcast_ctx); call_rcu(&masterv->rcu, br_master_vlan_rcu_free); } } @@ -280,10 +281,12 @@ static int __vlan_add(struct net_bridge_vlan *v, u16 flags, } else { v->stats = masterv->stats; } + br_multicast_port_ctx_init(p, v, &v->port_mcast_ctx); } else { err = br_switchdev_port_vlan_add(dev, v->vid, flags, extack); if (err && err != -EOPNOTSUPP) goto out; + br_multicast_ctx_init(br, v, &v->br_mcast_ctx); } /* Add the dev mac and count the vlan only if it's usable */ @@ -374,6 +377,7 @@ static int __vlan_del(struct net_bridge_vlan *v) br_vlan_rht_params); __vlan_del_list(v); nbp_vlan_set_vlan_dev_state(p, v->vid); + br_multicast_port_ctx_deinit(&v->port_mcast_ctx); call_rcu(&v->rcu, nbp_vlan_rcu_free); }
From patchwork Mon Jul 19 17:06:27 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nikolay Aleksandrov X-Patchwork-Id: 12386393 X-Patchwork-Delegate: kuba@kernel.org Received: from debil.vdiclient.nvidia.com (84-238-136-197.ip.btc-net.bg. [84.238.136.197]) by smtp.gmail.com with ESMTPSA id nc29sm6073896ejc.10.2021.07.19.10.10.00 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 19 Jul 2021 10:10:00 -0700 (PDT)
From: Nikolay Aleksandrov To: netdev@vger.kernel.org Cc: roopa@nvidia.com, bridge@lists.linux-foundation.org, Nikolay Aleksandrov Subject: [PATCH net-next 05/15] net: bridge: multicast: add vlan state initialization and control Date: Mon, 19 Jul 2021 20:06:27 +0300 Message-Id: <20210719170637.435541-6-razor@blackwall.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20210719170637.435541-1-razor@blackwall.org> References: <20210719170637.435541-1-razor@blackwall.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org X-Patchwork-Delegate: kuba@kernel.org From: Nikolay Aleksandrov
Add helpers to enable/disable vlan multicast based on its flags. Two flags are needed because we must know both whether the vlan has multicast enabled globally (user-controlled) and whether it is enabled on the specific device (bridge or port). The new private vlan flags are:
 - BR_VLFLAG_MCAST_ENABLED: multicast is enabled locally on the device; used when removing a vlan, toggling vlan mcast snooping and controlling a single vlan (kernel-controlled, valid under RTNL and multicast_lock)
 - BR_VLFLAG_GLOBAL_MCAST_ENABLED: multicast is enabled globally for the vlan; used to control the bridge-wide vlan mcast snooping for a single vlan (user-controlled, can be checked in any context)
Bridge vlan contexts are created with multicast snooping enabled by default, in line with the current bridge snooping defaults. To actually activate per-vlan snooping and context usage, a bridge-wide knob will be added later, which will default to disabled; enabling that knob automatically enables snooping for all vlans. All vlan contexts are initialized with the current bridge multicast context defaults.
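To illustrate how the two flags are meant to combine, here is a minimal sketch only (the helper name br_vlan_mcast_active is an assumption for illustration and is not part of this patch): a bridge (master) vlan's multicast context is effectively active when both the user-controlled global bit and the kernel-controlled local bit are set.

static inline bool br_vlan_mcast_active(const struct net_bridge_vlan *masterv)
{
	/* user-controlled: bridge-wide vlan snooping allowed for this vlan */
	if (!(masterv->priv_flags & BR_VLFLAG_GLOBAL_MCAST_ENABLED))
		return false;

	/* kernel-controlled: snooping currently enabled on this device */
	return !!(masterv->priv_flags & BR_VLFLAG_MCAST_ENABLED);
}

The patch itself expresses the same conditions through the negated helpers br_multicast_ctx_vlan_disabled() and br_multicast_ctx_vlan_global_disabled() added to br_private.h below.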
Signed-off-by: Nikolay Aleksandrov --- net/bridge/br_multicast.c | 128 ++++++++++++++++++++++++++++++++------ net/bridge/br_private.h | 50 +++++++++++++++ net/bridge/br_vlan.c | 4 ++ 3 files changed, 164 insertions(+), 18 deletions(-) diff --git a/net/bridge/br_multicast.c b/net/bridge/br_multicast.c index 6f803f789217..ef4e7de3f18d 100644 --- a/net/bridge/br_multicast.c +++ b/net/bridge/br_multicast.c @@ -214,7 +214,7 @@ static void __fwd_add_star_excl(struct net_bridge_mcast_port *pmctx, struct net_bridge_mcast *brmctx; memset(&sg_key, 0, sizeof(sg_key)); - brmctx = &pg->key.port->br->multicast_ctx; + brmctx = br_multicast_port_ctx_get_global(pmctx); sg_key.port = pg->key.port; sg_key.addr = *sg_ip; if (br_sg_port_find(brmctx->br, &sg_key)) @@ -275,6 +275,7 @@ void br_multicast_star_g_handle_mode(struct net_bridge_port_group *pg, memset(&sg_ip, 0, sizeof(sg_ip)); sg_ip = pg->key.addr; + for (pg_lst = mlock_dereference(mp->ports, br); pg_lst; pg_lst = mlock_dereference(pg_lst->next, br)) { @@ -435,7 +436,7 @@ static void br_multicast_fwd_src_add(struct net_bridge_group_src *src) memset(&sg_ip, 0, sizeof(sg_ip)); pmctx = &src->pg->key.port->multicast_ctx; - brmctx = &src->br->multicast_ctx; + brmctx = br_multicast_port_ctx_get_global(pmctx); sg_ip = src->pg->key.addr; sg_ip.src = src->addr.src; @@ -1775,9 +1776,11 @@ static void br_multicast_enable(struct bridge_mcast_own_query *query) static void __br_multicast_enable_port_ctx(struct net_bridge_mcast_port *pmctx) { struct net_bridge *br = pmctx->port->br; - struct net_bridge_mcast *brmctx = &pmctx->port->br->multicast_ctx; + struct net_bridge_mcast *brmctx; - if (!br_opt_get(br, BROPT_MULTICAST_ENABLED) || !netif_running(br->dev)) + brmctx = br_multicast_port_ctx_get_global(pmctx); + if (!br_opt_get(br, BROPT_MULTICAST_ENABLED) || + !netif_running(br->dev)) return; br_multicast_enable(&pmctx->ip4_own_query); @@ -1799,18 +1802,17 @@ void br_multicast_enable_port(struct net_bridge_port *port) spin_unlock(&br->multicast_lock); } -void br_multicast_disable_port(struct net_bridge_port *port) +static void __br_multicast_disable_port_ctx(struct net_bridge_mcast_port *pmctx) { - struct net_bridge_mcast_port *pmctx = &port->multicast_ctx; - struct net_bridge *br = port->br; struct net_bridge_port_group *pg; struct hlist_node *n; bool del = false; - spin_lock(&br->multicast_lock); - hlist_for_each_entry_safe(pg, n, &port->mglist, mglist) - if (!(pg->flags & MDB_PG_FLAGS_PERMANENT)) - br_multicast_find_del_pg(br, pg); + hlist_for_each_entry_safe(pg, n, &pmctx->port->mglist, mglist) + if (!(pg->flags & MDB_PG_FLAGS_PERMANENT) && + (!br_multicast_port_ctx_is_vlan(pmctx) || + pg->key.addr.vid == pmctx->vlan->vid)) + br_multicast_find_del_pg(pmctx->port->br, pg); del |= br_ip4_multicast_rport_del(pmctx); del_timer(&pmctx->ip4_mc_router_timer); @@ -1821,7 +1823,13 @@ void br_multicast_disable_port(struct net_bridge_port *port) del_timer(&pmctx->ip6_own_query.timer); #endif br_multicast_rport_del_notify(pmctx, del); - spin_unlock(&br->multicast_lock); +} + +void br_multicast_disable_port(struct net_bridge_port *port) +{ + spin_lock(&port->br->multicast_lock); + __br_multicast_disable_port_ctx(&port->multicast_ctx); + spin_unlock(&port->br->multicast_lock); } static int __grp_src_delete_marked(struct net_bridge_port_group *pg) @@ -3698,8 +3706,8 @@ void br_multicast_leave_snoopers(struct net_bridge *br) br_ip6_multicast_leave_snoopers(br); } -static void __br_multicast_open(struct net_bridge *br, - struct bridge_mcast_own_query *query) +static void 
__br_multicast_open_query(struct net_bridge *br, + struct bridge_mcast_own_query *query) { query->startup_sent = 0; @@ -3709,14 +3717,36 @@ static void __br_multicast_open(struct net_bridge *br, mod_timer(&query->timer, jiffies); } -void br_multicast_open(struct net_bridge *br) +static void __br_multicast_open(struct net_bridge_mcast *brmctx) { - __br_multicast_open(br, &br->multicast_ctx.ip4_own_query); + __br_multicast_open_query(brmctx->br, &brmctx->ip4_own_query); #if IS_ENABLED(CONFIG_IPV6) - __br_multicast_open(br, &br->multicast_ctx.ip6_own_query); + __br_multicast_open_query(brmctx->br, &brmctx->ip6_own_query); #endif } +void br_multicast_open(struct net_bridge *br) +{ + struct net_bridge_vlan_group *vg; + struct net_bridge_vlan *vlan; + + ASSERT_RTNL(); + + vg = br_vlan_group(br); + if (vg) { + list_for_each_entry(vlan, &vg->vlan_list, vlist) { + struct net_bridge_mcast *brmctx; + + brmctx = &vlan->br_mcast_ctx; + if (br_vlan_is_brentry(vlan) && + !br_multicast_ctx_vlan_disabled(brmctx)) + __br_multicast_open(&vlan->br_mcast_ctx); + } + } + + __br_multicast_open(&br->multicast_ctx); +} + static void __br_multicast_stop(struct net_bridge_mcast *brmctx) { del_timer_sync(&brmctx->ip4_mc_router_timer); @@ -3729,8 +3759,70 @@ static void __br_multicast_stop(struct net_bridge_mcast *brmctx) #endif } +void br_multicast_toggle_one_vlan(struct net_bridge_vlan *vlan, bool on) +{ + struct net_bridge *br; + + /* it's okay to check for the flag without the multicast lock because it + * can only change under RTNL -> multicast_lock, we need the latter to + * sync with timers and packets + */ + if (on == !!(vlan->priv_flags & BR_VLFLAG_MCAST_ENABLED)) + return; + + if (br_vlan_is_master(vlan)) { + br = vlan->br; + + if (!br_vlan_is_brentry(vlan) || + (on && + br_multicast_ctx_vlan_global_disabled(&vlan->br_mcast_ctx))) + return; + + spin_lock_bh(&br->multicast_lock); + vlan->priv_flags ^= BR_VLFLAG_MCAST_ENABLED; + spin_unlock_bh(&br->multicast_lock); + + if (on) + __br_multicast_open(&vlan->br_mcast_ctx); + else + __br_multicast_stop(&vlan->br_mcast_ctx); + } else { + struct net_bridge_mcast *brmctx; + + brmctx = br_multicast_port_ctx_get_global(&vlan->port_mcast_ctx); + if (on && br_multicast_ctx_vlan_global_disabled(brmctx)) + return; + + br = vlan->port->br; + spin_lock_bh(&br->multicast_lock); + vlan->priv_flags ^= BR_VLFLAG_MCAST_ENABLED; + if (on) + __br_multicast_enable_port_ctx(&vlan->port_mcast_ctx); + else + __br_multicast_disable_port_ctx(&vlan->port_mcast_ctx); + spin_unlock_bh(&br->multicast_lock); + } +} + void br_multicast_stop(struct net_bridge *br) { + struct net_bridge_vlan_group *vg; + struct net_bridge_vlan *vlan; + + ASSERT_RTNL(); + + vg = br_vlan_group(br); + if (vg) { + list_for_each_entry(vlan, &vg->vlan_list, vlist) { + struct net_bridge_mcast *brmctx; + + brmctx = &vlan->br_mcast_ctx; + if (br_vlan_is_brentry(vlan) && + !br_multicast_ctx_vlan_disabled(brmctx)) + __br_multicast_stop(&vlan->br_mcast_ctx); + } + } + __br_multicast_stop(&br->multicast_ctx); } @@ -3876,7 +3968,7 @@ static void br_multicast_start_querier(struct net_bridge_mcast *brmctx, { struct net_bridge_port *port; - __br_multicast_open(brmctx->br, query); + __br_multicast_open_query(brmctx->br, query); rcu_read_lock(); list_for_each_entry_rcu(port, &brmctx->br->port_list, list) { diff --git a/net/bridge/br_private.h b/net/bridge/br_private.h index 7a0d077c2c39..a27c8b8f6efd 100644 --- a/net/bridge/br_private.h +++ b/net/bridge/br_private.h @@ -151,6 +151,8 @@ struct br_tunnel_info { enum { 
BR_VLFLAG_PER_PORT_STATS = BIT(0), BR_VLFLAG_ADDED_BY_SWITCHDEV = BIT(1), + BR_VLFLAG_MCAST_ENABLED = BIT(2), + BR_VLFLAG_GLOBAL_MCAST_ENABLED = BIT(3), }; /** @@ -914,6 +916,7 @@ void br_multicast_port_ctx_init(struct net_bridge_port *port, struct net_bridge_vlan *vlan, struct net_bridge_mcast_port *pmctx); void br_multicast_port_ctx_deinit(struct net_bridge_mcast_port *pmctx); +void br_multicast_toggle_one_vlan(struct net_bridge_vlan *vlan, bool on); static inline bool br_group_is_l2(const struct br_ip *group) { @@ -1075,6 +1078,48 @@ static inline unsigned long br_multicast_gmi(const struct net_bridge_mcast *brmc return 2 * brmctx->multicast_query_interval + brmctx->multicast_query_response_interval; } + +static inline bool +br_multicast_ctx_is_vlan(const struct net_bridge_mcast *brmctx) +{ + return !!brmctx->vlan; +} + +static inline bool +br_multicast_port_ctx_is_vlan(const struct net_bridge_mcast_port *pmctx) +{ + return !!pmctx->vlan; +} + +static inline struct net_bridge_mcast * +br_multicast_port_ctx_get_global(const struct net_bridge_mcast_port *pmctx) +{ + if (!br_multicast_port_ctx_is_vlan(pmctx)) + return &pmctx->port->br->multicast_ctx; + else + return &pmctx->vlan->brvlan->br_mcast_ctx; +} + +static inline bool +br_multicast_ctx_vlan_global_disabled(const struct net_bridge_mcast *brmctx) +{ + return br_multicast_ctx_is_vlan(brmctx) && + !(brmctx->vlan->priv_flags & BR_VLFLAG_GLOBAL_MCAST_ENABLED); +} + +static inline bool +br_multicast_ctx_vlan_disabled(const struct net_bridge_mcast *brmctx) +{ + return br_multicast_ctx_is_vlan(brmctx) && + !(brmctx->vlan->priv_flags & BR_VLFLAG_MCAST_ENABLED); +} + +static inline bool +br_multicast_port_ctx_vlan_disabled(const struct net_bridge_mcast_port *pmctx) +{ + return br_multicast_port_ctx_is_vlan(pmctx) && + !(pmctx->vlan->priv_flags & BR_VLFLAG_MCAST_ENABLED); +} #else static inline int br_multicast_rcv(struct net_bridge_mcast *brmctx, struct net_bridge_mcast_port *pmctx, @@ -1208,6 +1253,11 @@ static inline void br_multicast_port_ctx_init(struct net_bridge_port *port, static inline void br_multicast_port_ctx_deinit(struct net_bridge_mcast_port *pmctx) { } + +static inline void br_multicast_toggle_one_vlan(struct net_bridge_vlan *vlan, + bool on) +{ +} #endif /* br_vlan.c */ diff --git a/net/bridge/br_vlan.c b/net/bridge/br_vlan.c index e7b7bb0a005b..1a8cb2b1b762 100644 --- a/net/bridge/br_vlan.c +++ b/net/bridge/br_vlan.c @@ -190,6 +190,7 @@ static void br_vlan_put_master(struct net_bridge_vlan *masterv) rhashtable_remove_fast(&vg->vlan_hash, &masterv->vnode, br_vlan_rht_params); __vlan_del_list(masterv); + br_multicast_toggle_one_vlan(masterv, false); br_multicast_ctx_deinit(&masterv->br_mcast_ctx); call_rcu(&masterv->rcu, br_master_vlan_rcu_free); } @@ -287,6 +288,7 @@ static int __vlan_add(struct net_bridge_vlan *v, u16 flags, if (err && err != -EOPNOTSUPP) goto out; br_multicast_ctx_init(br, v, &v->br_mcast_ctx); + v->priv_flags |= BR_VLFLAG_GLOBAL_MCAST_ENABLED; } /* Add the dev mac and count the vlan only if it's usable */ @@ -309,6 +311,7 @@ static int __vlan_add(struct net_bridge_vlan *v, u16 flags, __vlan_add_list(v); __vlan_add_flags(v, flags); + br_multicast_toggle_one_vlan(v, true); if (p) nbp_vlan_set_vlan_dev_state(p, v->vid); @@ -377,6 +380,7 @@ static int __vlan_del(struct net_bridge_vlan *v) br_vlan_rht_params); __vlan_del_list(v); nbp_vlan_set_vlan_dev_state(p, v->vid); + br_multicast_toggle_one_vlan(v, false); br_multicast_port_ctx_deinit(&v->port_mcast_ctx); call_rcu(&v->rcu, nbp_vlan_rcu_free); } From 
patchwork Mon Jul 19 17:06:28 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nikolay Aleksandrov X-Patchwork-Id: 12386397 X-Patchwork-Delegate: kuba@kernel.org Received: from debil.vdiclient.nvidia.com (84-238-136-197.ip.btc-net.bg.
[84.238.136.197]) by smtp.gmail.com with ESMTPSA id nc29sm6073896ejc.10.2021.07.19.10.10.01 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 19 Jul 2021 10:10:01 -0700 (PDT) From: Nikolay Aleksandrov To: netdev@vger.kernel.org Cc: roopa@nvidia.com, bridge@lists.linux-foundation.org, Nikolay Aleksandrov Subject: [PATCH net-next 06/15] net: bridge: add vlan mcast snooping knob Date: Mon, 19 Jul 2021 20:06:28 +0300 Message-Id: <20210719170637.435541-7-razor@blackwall.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20210719170637.435541-1-razor@blackwall.org> References: <20210719170637.435541-1-razor@blackwall.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org X-Patchwork-Delegate: kuba@kernel.org From: Nikolay Aleksandrov Add a global knob that controls if vlan multicast snooping is enabled. The proper contexts (vlan or bridge-wide) will be chosen based on the knob when processing packets and changing bridge device state. Note that vlans have their individual mcast snooping enabled by default, but this knob is needed to turn on bridge vlan snooping. It is disabled by default. To enable the knob vlan filtering must also be enabled, it doesn't make sense to have vlan mcast snooping without vlan filtering since that would lead to inconsistencies. Disabling vlan filtering will also automatically disable vlan mcast snooping. Signed-off-by: Nikolay Aleksandrov --- include/uapi/linux/if_bridge.h | 2 + net/bridge/br.c | 9 ++- net/bridge/br_device.c | 7 +- net/bridge/br_input.c | 5 +- net/bridge/br_multicast.c | 143 ++++++++++++++++++++++++++------- net/bridge/br_private.h | 37 +++++++-- net/bridge/br_vlan.c | 20 +++-- 7 files changed, 175 insertions(+), 48 deletions(-) diff --git a/include/uapi/linux/if_bridge.h b/include/uapi/linux/if_bridge.h index d8156d1526ae..57a63a1572e0 100644 --- a/include/uapi/linux/if_bridge.h +++ b/include/uapi/linux/if_bridge.h @@ -720,12 +720,14 @@ struct br_mcast_stats { /* bridge boolean options * BR_BOOLOPT_NO_LL_LEARN - disable learning from link-local packets + * BR_BOOLOPT_MCAST_VLAN_SNOOPING - control vlan multicast snooping * * IMPORTANT: if adding a new option do not forget to handle * it in br_boolopt_toggle/get and bridge sysfs */ enum br_boolopt_id { BR_BOOLOPT_NO_LL_LEARN, + BR_BOOLOPT_MCAST_VLAN_SNOOPING, BR_BOOLOPT_MAX }; diff --git a/net/bridge/br.c b/net/bridge/br.c index ef743f94254d..51f2e25c4cd6 100644 --- a/net/bridge/br.c +++ b/net/bridge/br.c @@ -214,17 +214,22 @@ static struct notifier_block br_switchdev_notifier = { int br_boolopt_toggle(struct net_bridge *br, enum br_boolopt_id opt, bool on, struct netlink_ext_ack *extack) { + int err = 0; + switch (opt) { case BR_BOOLOPT_NO_LL_LEARN: br_opt_toggle(br, BROPT_NO_LL_LEARN, on); break; + case BR_BOOLOPT_MCAST_VLAN_SNOOPING: + err = br_multicast_toggle_vlan_snooping(br, on, extack); + break; default: /* shouldn't be called with unsupported options */ WARN_ON(1); break; } - return 0; + return err; } int br_boolopt_get(const struct net_bridge *br, enum br_boolopt_id opt) @@ -232,6 +237,8 @@ int br_boolopt_get(const struct net_bridge *br, enum br_boolopt_id opt) switch (opt) { case BR_BOOLOPT_NO_LL_LEARN: return br_opt_get(br, BROPT_NO_LL_LEARN); + case BR_BOOLOPT_MCAST_VLAN_SNOOPING: + return br_opt_get(br, BROPT_MCAST_VLAN_SNOOPING_ENABLED); default: /* shouldn't be called with unsupported options */ WARN_ON(1); diff --git a/net/bridge/br_device.c b/net/bridge/br_device.c index e815bf4f9f24..00daf35f54d5 100644 --- a/net/bridge/br_device.c +++ 
b/net/bridge/br_device.c @@ -27,12 +27,14 @@ EXPORT_SYMBOL_GPL(nf_br_ops); /* net device transmit always called with BH disabled */ netdev_tx_t br_dev_xmit(struct sk_buff *skb, struct net_device *dev) { + struct net_bridge_mcast_port *pmctx_null = NULL; struct net_bridge *br = netdev_priv(dev); struct net_bridge_mcast *brmctx = &br->multicast_ctx; struct net_bridge_fdb_entry *dst; struct net_bridge_mdb_entry *mdst; const struct nf_br_ops *nf_ops; u8 state = BR_STATE_FORWARDING; + struct net_bridge_vlan *vlan; const unsigned char *dest; u16 vid = 0; @@ -54,7 +56,8 @@ netdev_tx_t br_dev_xmit(struct sk_buff *skb, struct net_device *dev) skb_reset_mac_header(skb); skb_pull(skb, ETH_HLEN); - if (!br_allowed_ingress(br, br_vlan_group_rcu(br), skb, &vid, &state)) + if (!br_allowed_ingress(br, br_vlan_group_rcu(br), skb, &vid, + &state, &vlan)) goto out; if (IS_ENABLED(CONFIG_INET) && @@ -83,7 +86,7 @@ netdev_tx_t br_dev_xmit(struct sk_buff *skb, struct net_device *dev) br_flood(br, skb, BR_PKT_MULTICAST, false, true); goto out; } - if (br_multicast_rcv(brmctx, NULL, skb, vid)) { + if (br_multicast_rcv(&brmctx, &pmctx_null, vlan, skb, vid)) { kfree_skb(skb); goto out; } diff --git a/net/bridge/br_input.c b/net/bridge/br_input.c index bb2036dd4934..8a0c0cc55cb4 100644 --- a/net/bridge/br_input.c +++ b/net/bridge/br_input.c @@ -73,6 +73,7 @@ int br_handle_frame_finish(struct net *net, struct sock *sk, struct sk_buff *skb struct net_bridge_mdb_entry *mdst; bool local_rcv, mcast_hit = false; struct net_bridge_mcast *brmctx; + struct net_bridge_vlan *vlan; struct net_bridge *br; u16 vid = 0; u8 state; @@ -84,7 +85,7 @@ int br_handle_frame_finish(struct net *net, struct sock *sk, struct sk_buff *skb pmctx = &p->multicast_ctx; state = p->state; if (!br_allowed_ingress(p->br, nbp_vlan_group_rcu(p), skb, &vid, - &state)) + &state, &vlan)) goto out; nbp_switchdev_frame_mark(p, skb); @@ -102,7 +103,7 @@ int br_handle_frame_finish(struct net *net, struct sock *sk, struct sk_buff *skb local_rcv = true; } else { pkt_type = BR_PKT_MULTICAST; - if (br_multicast_rcv(brmctx, pmctx, skb, vid)) + if (br_multicast_rcv(&brmctx, &pmctx, vlan, skb, vid)) goto drop; } } diff --git a/net/bridge/br_multicast.c b/net/bridge/br_multicast.c index ef4e7de3f18d..b71772828b23 100644 --- a/net/bridge/br_multicast.c +++ b/net/bridge/br_multicast.c @@ -1797,9 +1797,9 @@ void br_multicast_enable_port(struct net_bridge_port *port) { struct net_bridge *br = port->br; - spin_lock(&br->multicast_lock); + spin_lock_bh(&br->multicast_lock); __br_multicast_enable_port_ctx(&port->multicast_ctx); - spin_unlock(&br->multicast_lock); + spin_unlock_bh(&br->multicast_lock); } static void __br_multicast_disable_port_ctx(struct net_bridge_mcast_port *pmctx) @@ -1827,9 +1827,9 @@ static void __br_multicast_disable_port_ctx(struct net_bridge_mcast_port *pmctx) void br_multicast_disable_port(struct net_bridge_port *port) { - spin_lock(&port->br->multicast_lock); + spin_lock_bh(&port->br->multicast_lock); __br_multicast_disable_port_ctx(&port->multicast_ctx); - spin_unlock(&port->br->multicast_lock); + spin_unlock_bh(&port->br->multicast_lock); } static int __grp_src_delete_marked(struct net_bridge_port_group *pg) @@ -3510,8 +3510,9 @@ static int br_multicast_ipv6_rcv(struct net_bridge_mcast *brmctx, } #endif -int br_multicast_rcv(struct net_bridge_mcast *brmctx, - struct net_bridge_mcast_port *pmctx, +int br_multicast_rcv(struct net_bridge_mcast **brmctx, + struct net_bridge_mcast_port **pmctx, + struct net_bridge_vlan *vlan, struct sk_buff *skb, u16 
vid) { int ret = 0; @@ -3519,16 +3520,36 @@ int br_multicast_rcv(struct net_bridge_mcast *brmctx, BR_INPUT_SKB_CB(skb)->igmp = 0; BR_INPUT_SKB_CB(skb)->mrouters_only = 0; - if (!br_opt_get(brmctx->br, BROPT_MULTICAST_ENABLED)) + if (!br_opt_get((*brmctx)->br, BROPT_MULTICAST_ENABLED)) return 0; + if (br_opt_get((*brmctx)->br, BROPT_MCAST_VLAN_SNOOPING_ENABLED) && vlan) { + const struct net_bridge_vlan *masterv; + + /* the vlan has the master flag set only when transmitting + * through the bridge device + */ + if (br_vlan_is_master(vlan)) { + masterv = vlan; + *brmctx = &vlan->br_mcast_ctx; + *pmctx = NULL; + } else { + masterv = vlan->brvlan; + *brmctx = &vlan->brvlan->br_mcast_ctx; + *pmctx = &vlan->port_mcast_ctx; + } + + if (!(masterv->priv_flags & BR_VLFLAG_GLOBAL_MCAST_ENABLED)) + return 0; + } + switch (skb->protocol) { case htons(ETH_P_IP): - ret = br_multicast_ipv4_rcv(brmctx, pmctx, skb, vid); + ret = br_multicast_ipv4_rcv(*brmctx, *pmctx, skb, vid); break; #if IS_ENABLED(CONFIG_IPV6) case htons(ETH_P_IPV6): - ret = br_multicast_ipv6_rcv(brmctx, pmctx, skb, vid); + ret = br_multicast_ipv6_rcv(*brmctx, *pmctx, skb, vid); break; #endif } @@ -3727,20 +3748,22 @@ static void __br_multicast_open(struct net_bridge_mcast *brmctx) void br_multicast_open(struct net_bridge *br) { - struct net_bridge_vlan_group *vg; - struct net_bridge_vlan *vlan; - ASSERT_RTNL(); - vg = br_vlan_group(br); - if (vg) { - list_for_each_entry(vlan, &vg->vlan_list, vlist) { - struct net_bridge_mcast *brmctx; - - brmctx = &vlan->br_mcast_ctx; - if (br_vlan_is_brentry(vlan) && - !br_multicast_ctx_vlan_disabled(brmctx)) - __br_multicast_open(&vlan->br_mcast_ctx); + if (br_opt_get(br, BROPT_MCAST_VLAN_SNOOPING_ENABLED)) { + struct net_bridge_vlan_group *vg; + struct net_bridge_vlan *vlan; + + vg = br_vlan_group(br); + if (vg) { + list_for_each_entry(vlan, &vg->vlan_list, vlist) { + struct net_bridge_mcast *brmctx; + + brmctx = &vlan->br_mcast_ctx; + if (br_vlan_is_brentry(vlan) && + !br_multicast_ctx_vlan_disabled(brmctx)) + __br_multicast_open(&vlan->br_mcast_ctx); + } } } @@ -3804,22 +3827,80 @@ void br_multicast_toggle_one_vlan(struct net_bridge_vlan *vlan, bool on) } } -void br_multicast_stop(struct net_bridge *br) +void br_multicast_toggle_vlan(struct net_bridge_vlan *vlan, bool on) +{ + struct net_bridge_port *p; + + if (WARN_ON_ONCE(!br_vlan_is_master(vlan))) + return; + + list_for_each_entry(p, &vlan->br->port_list, list) { + struct net_bridge_vlan *vport; + + vport = br_vlan_find(nbp_vlan_group(p), vlan->vid); + if (!vport) + continue; + br_multicast_toggle_one_vlan(vport, on); + } +} + +int br_multicast_toggle_vlan_snooping(struct net_bridge *br, bool on, + struct netlink_ext_ack *extack) { struct net_bridge_vlan_group *vg; struct net_bridge_vlan *vlan; + struct net_bridge_port *p; - ASSERT_RTNL(); + if (br_opt_get(br, BROPT_MCAST_VLAN_SNOOPING_ENABLED) == on) + return 0; + + if (on && !br_opt_get(br, BROPT_VLAN_ENABLED)) { + NL_SET_ERR_MSG_MOD(extack, "Cannot enable multicast vlan snooping with vlan filtering disabled"); + return -EINVAL; + } vg = br_vlan_group(br); - if (vg) { - list_for_each_entry(vlan, &vg->vlan_list, vlist) { - struct net_bridge_mcast *brmctx; - - brmctx = &vlan->br_mcast_ctx; - if (br_vlan_is_brentry(vlan) && - !br_multicast_ctx_vlan_disabled(brmctx)) - __br_multicast_stop(&vlan->br_mcast_ctx); + if (!vg) + return 0; + + br_opt_toggle(br, BROPT_MCAST_VLAN_SNOOPING_ENABLED, on); + + /* disable/enable non-vlan mcast contexts based on vlan snooping */ + if (on) + 
__br_multicast_stop(&br->multicast_ctx); + else + __br_multicast_open(&br->multicast_ctx); + list_for_each_entry(p, &br->port_list, list) { + if (on) + br_multicast_disable_port(p); + else + br_multicast_enable_port(p); + } + + list_for_each_entry(vlan, &vg->vlan_list, vlist) + br_multicast_toggle_vlan(vlan, on); + + return 0; +} + +void br_multicast_stop(struct net_bridge *br) +{ + ASSERT_RTNL(); + + if (br_opt_get(br, BROPT_MCAST_VLAN_SNOOPING_ENABLED)) { + struct net_bridge_vlan_group *vg; + struct net_bridge_vlan *vlan; + + vg = br_vlan_group(br); + if (vg) { + list_for_each_entry(vlan, &vg->vlan_list, vlist) { + struct net_bridge_mcast *brmctx; + + brmctx = &vlan->br_mcast_ctx; + if (br_vlan_is_brentry(vlan) && + !br_multicast_ctx_vlan_disabled(brmctx)) + __br_multicast_stop(&vlan->br_mcast_ctx); + } } } diff --git a/net/bridge/br_private.h b/net/bridge/br_private.h index a27c8b8f6efd..a643f6bf759f 100644 --- a/net/bridge/br_private.h +++ b/net/bridge/br_private.h @@ -441,6 +441,7 @@ enum net_bridge_opts { BROPT_VLAN_STATS_PER_PORT, BROPT_NO_LL_LEARN, BROPT_VLAN_BRIDGE_BINDING, + BROPT_MCAST_VLAN_SNOOPING_ENABLED, }; struct net_bridge { @@ -842,8 +843,9 @@ int br_ioctl_deviceless_stub(struct net *net, unsigned int cmd, /* br_multicast.c */ #ifdef CONFIG_BRIDGE_IGMP_SNOOPING -int br_multicast_rcv(struct net_bridge_mcast *brmctx, - struct net_bridge_mcast_port *pmctx, +int br_multicast_rcv(struct net_bridge_mcast **brmctx, + struct net_bridge_mcast_port **pmctx, + struct net_bridge_vlan *vlan, struct sk_buff *skb, u16 vid); struct net_bridge_mdb_entry *br_mdb_get(struct net_bridge_mcast *brmctx, struct sk_buff *skb, u16 vid); @@ -917,6 +919,9 @@ void br_multicast_port_ctx_init(struct net_bridge_port *port, struct net_bridge_mcast_port *pmctx); void br_multicast_port_ctx_deinit(struct net_bridge_mcast_port *pmctx); void br_multicast_toggle_one_vlan(struct net_bridge_vlan *vlan, bool on); +void br_multicast_toggle_vlan(struct net_bridge_vlan *vlan, bool on); +int br_multicast_toggle_vlan_snooping(struct net_bridge *br, bool on, + struct netlink_ext_ack *extack); static inline bool br_group_is_l2(const struct br_ip *group) { @@ -1103,7 +1108,8 @@ br_multicast_port_ctx_get_global(const struct net_bridge_mcast_port *pmctx) static inline bool br_multicast_ctx_vlan_global_disabled(const struct net_bridge_mcast *brmctx) { - return br_multicast_ctx_is_vlan(brmctx) && + return br_opt_get(brmctx->br, BROPT_MCAST_VLAN_SNOOPING_ENABLED) && + br_multicast_ctx_is_vlan(brmctx) && !(brmctx->vlan->priv_flags & BR_VLFLAG_GLOBAL_MCAST_ENABLED); } @@ -1121,8 +1127,9 @@ br_multicast_port_ctx_vlan_disabled(const struct net_bridge_mcast_port *pmctx) !(pmctx->vlan->priv_flags & BR_VLFLAG_MCAST_ENABLED); } #else -static inline int br_multicast_rcv(struct net_bridge_mcast *brmctx, - struct net_bridge_mcast_port *pmctx, +static inline int br_multicast_rcv(struct net_bridge_mcast **brmctx, + struct net_bridge_mcast_port **pmctx, + struct net_bridge_vlan *vlan, struct sk_buff *skb, u16 vid) { @@ -1258,13 +1265,26 @@ static inline void br_multicast_toggle_one_vlan(struct net_bridge_vlan *vlan, bool on) { } + +static inline void br_multicast_toggle_vlan(struct net_bridge_vlan *vlan, + bool on) +{ +} + +static inline int br_multicast_toggle_vlan_snooping(struct net_bridge *br, + bool on, + struct netlink_ext_ack *extack) +{ + return -EOPNOTSUPP; +} #endif /* br_vlan.c */ #ifdef CONFIG_BRIDGE_VLAN_FILTERING bool br_allowed_ingress(const struct net_bridge *br, struct net_bridge_vlan_group *vg, struct sk_buff *skb, - u16 
*vid, u8 *state); + u16 *vid, u8 *state, + struct net_bridge_vlan **vlan); bool br_allowed_egress(struct net_bridge_vlan_group *vg, const struct sk_buff *skb); bool br_should_learn(struct net_bridge_port *p, struct sk_buff *skb, u16 *vid); @@ -1376,8 +1396,11 @@ static inline u16 br_vlan_flags(const struct net_bridge_vlan *v, u16 pvid) static inline bool br_allowed_ingress(const struct net_bridge *br, struct net_bridge_vlan_group *vg, struct sk_buff *skb, - u16 *vid, u8 *state) + u16 *vid, u8 *state, + struct net_bridge_vlan **vlan) + { + *vlan = NULL; return true; } diff --git a/net/bridge/br_vlan.c b/net/bridge/br_vlan.c index 1a8cb2b1b762..ab4969a4a380 100644 --- a/net/bridge/br_vlan.c +++ b/net/bridge/br_vlan.c @@ -481,7 +481,8 @@ struct sk_buff *br_handle_vlan(struct net_bridge *br, static bool __allowed_ingress(const struct net_bridge *br, struct net_bridge_vlan_group *vg, struct sk_buff *skb, u16 *vid, - u8 *state) + u8 *state, + struct net_bridge_vlan **vlan) { struct pcpu_sw_netstats *stats; struct net_bridge_vlan *v; @@ -546,8 +547,9 @@ static bool __allowed_ingress(const struct net_bridge *br, */ skb->vlan_tci |= pvid; - /* if stats are disabled we can avoid the lookup */ - if (!br_opt_get(br, BROPT_VLAN_STATS_ENABLED)) { + /* if snooping and stats are disabled we can avoid the lookup */ + if (!br_opt_get(br, BROPT_MCAST_VLAN_SNOOPING_ENABLED) && + !br_opt_get(br, BROPT_VLAN_STATS_ENABLED)) { if (*state == BR_STATE_FORWARDING) { *state = br_vlan_get_pvid_state(vg); return br_vlan_state_allowed(*state, true); @@ -574,6 +576,8 @@ static bool __allowed_ingress(const struct net_bridge *br, u64_stats_update_end(&stats->syncp); } + *vlan = v; + return true; drop: @@ -583,17 +587,19 @@ static bool __allowed_ingress(const struct net_bridge *br, bool br_allowed_ingress(const struct net_bridge *br, struct net_bridge_vlan_group *vg, struct sk_buff *skb, - u16 *vid, u8 *state) + u16 *vid, u8 *state, + struct net_bridge_vlan **vlan) { /* If VLAN filtering is disabled on the bridge, all packets are * permitted. */ + *vlan = NULL; if (!br_opt_get(br, BROPT_VLAN_ENABLED)) { BR_INPUT_SKB_CB(skb)->vlan_filtered = false; return true; } - return __allowed_ingress(br, vg, skb, vid, state); + return __allowed_ingress(br, vg, skb, vid, state, vlan); } /* Called under RCU. 
*/ @@ -834,6 +840,10 @@ int br_vlan_filter_toggle(struct net_bridge *br, unsigned long val, br_manage_promisc(br); recalculate_group_addr(br); br_recalculate_fwd_mask(br); + if (!val && br_opt_get(br, BROPT_MCAST_VLAN_SNOOPING_ENABLED)) { + br_info(br, "vlan filtering disabled, automatically disabling multicast vlan snooping\n"); + br_multicast_toggle_vlan_snooping(br, false, NULL); + } return 0; } From patchwork Mon Jul 19 17:06:29 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nikolay Aleksandrov X-Patchwork-Id: 12386369 X-Patchwork-Delegate: kuba@kernel.org Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.8 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 854B3C63799 for ; Mon, 19 Jul 2021 17:22:36 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 72BC861107 for ; Mon, 19 Jul 2021 17:22:36 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1355736AbhGSQjl (ORCPT ); Mon, 19 Jul 2021 12:39:41 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:36202 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1355660AbhGSQgd (ORCPT ); Mon, 19 Jul 2021 12:36:33 -0400 Received: from mail-ej1-x634.google.com (mail-ej1-x634.google.com [IPv6:2a00:1450:4864:20::634]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 13377C04F97C for ; Mon, 19 Jul 2021 09:49:51 -0700 (PDT) Received: by mail-ej1-x634.google.com with SMTP id c17so29853334ejk.13 for ; Mon, 19 Jul 2021 10:10:04 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=blackwall-org.20150623.gappssmtp.com; s=20150623; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=l2f8wSd2cFUSrpRoYeQ6oFo0IF2ypfibcU91DvCPZco=; b=wgmz7k/s54dbcXvrb+4rMBZQwTgHvK9EAWb8El5VRb4pA+H1C/njj2yPqKsVbkoDyq hb1QLfyM7pmQ4/LafPsodQpBu6ZS7Du86JZVHtcYDWCYpwXVHhlmE7augpY7WVW3XoNS uTZOVTmpp6BxA1p7n7amNMxqNBTlWYzSjr2y58XH4fpdUpZm9pjqmw5gumYK3EBmluLF nnbrqlvCU0ifK5hDRL9uT2tOQgx5vHbDIDUYqmzfOhGhQLPSGkl08MRIpQWQjlReK4P8 8x2/CXBwbe0rN/FV4c0WmDwtS38M2Ub6GE1JI6yQDoH1TX7SgTPwMnEP5617q8it6mBb C78g== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=l2f8wSd2cFUSrpRoYeQ6oFo0IF2ypfibcU91DvCPZco=; b=D4pWSl9OPf+WkanYxhv1PYIrX58GX8LgtCQiiLQhcJARwkw4a+NVw0HZGNa3x0Urty 3pIN0fGpcupsgdDYVG0tuUmZlZNRFiKtudoIeF/cdwyj8QX8brSimlPpjILN+b2M/H+B 6t5P1hu2SihY3GjrbMtffJkSgEG/i5EEygKWsoCeZJpePOFvU4NKJrXtPXsbtP0ywQiK /R91op/9d7/Q9UIOkAlFUwMBAPwy5Oqu6G0mCzX+PPhQOOd3MoNS+B87YZDfdk9Gjhrn 3UCf4dVrVGXFGJnsAApAI2pcqx6j5S9bkltW8YKniq3cQ29aE91hWt919b9M/EVzZp4Q 6Vyg== X-Gm-Message-State: AOAM532pjttE5/JFmK4Dg63fmB/fv4xrgEPQ8xcZOvRwolphdaL3lKb9 5gGS/YN5sCCJ6C01cUmobKao8a95zWCWs2aCRDA= X-Google-Smtp-Source: ABdhPJzwhFCcOiRVEyJmDZY6cZkJypwCnf5hzFzo6BMWKhnD0MY2PWKu9ajB0VhtCuIfg9+R9lgD3g== X-Received: by 2002:a17:907:1c21:: with SMTP id nc33mr27486888ejc.436.1626714603306; Mon, 
19 Jul 2021 10:10:03 -0700 (PDT) Received: from debil.vdiclient.nvidia.com (84-238-136-197.ip.btc-net.bg. [84.238.136.197]) by smtp.gmail.com with ESMTPSA id nc29sm6073896ejc.10.2021.07.19.10.10.02 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 19 Jul 2021 10:10:02 -0700 (PDT) From: Nikolay Aleksandrov To: netdev@vger.kernel.org Cc: roopa@nvidia.com, bridge@lists.linux-foundation.org, Nikolay Aleksandrov Subject: [PATCH net-next 07/15] net: bridge: multicast: add helper to get port mcast context from port group Date: Mon, 19 Jul 2021 20:06:29 +0300 Message-Id: <20210719170637.435541-8-razor@blackwall.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20210719170637.435541-1-razor@blackwall.org> References: <20210719170637.435541-1-razor@blackwall.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org X-Patchwork-Delegate: kuba@kernel.org From: Nikolay Aleksandrov Add br_multicast_pg_to_port_ctx() which returns the proper port multicast context from either port or vlan based on bridge option and vlan flags. As the comment inside explains the locking is a bit tricky, we rely on the fact that BR_VLFLAG_MCAST_ENABLED requires multicast_lock to change and we also require it to be held to call that helper. If we find the vlan under rcu and it still has the flag then we can be sure it will be alive until we unlock multicast_lock which should be enough. Note that the context might change from vlan to bridge between different calls to this helper as the mcast vlan knob requires only rtnl so it should be used carefully and for read-only/check purposes. Signed-off-by: Nikolay Aleksandrov --- net/bridge/br_multicast.c | 38 ++++++++++++++++++++++++++++++++++++++ 1 file changed, 38 insertions(+) diff --git a/net/bridge/br_multicast.c b/net/bridge/br_multicast.c index b71772828b23..353406f2971a 100644 --- a/net/bridge/br_multicast.c +++ b/net/bridge/br_multicast.c @@ -192,6 +192,44 @@ struct net_bridge_mdb_entry *br_mdb_get(struct net_bridge_mcast *brmctx, return br_mdb_ip_get_rcu(br, &ip); } +/* IMPORTANT: this function must be used only when the contexts cannot be + * passed down (e.g. timer) and must be used for read-only purposes because + * the vlan snooping option can change, so it can return any context + * (non-vlan or vlan). Its initial intended purpose is to read timer values + * from the *current* context based on the option. At worst that could lead + * to inconsistent timers when the contexts are changed, i.e. 
src timer + * which needs to re-arm with a specific delay taken from the old context + */ +static struct net_bridge_mcast_port * +br_multicast_pg_to_port_ctx(const struct net_bridge_port_group *pg) +{ + struct net_bridge_mcast_port *pmctx = &pg->key.port->multicast_ctx; + struct net_bridge_vlan *vlan; + + lockdep_assert_held_once(&pg->key.port->br->multicast_lock); + + /* if vlan snooping is disabled use the port's multicast context */ + if (!pg->key.addr.vid || + !br_opt_get(pg->key.port->br, BROPT_MCAST_VLAN_SNOOPING_ENABLED)) + goto out; + + /* locking is tricky here, due to different rules for multicast and + * vlans we need to take rcu to find the vlan and make sure it has + * the BR_VLFLAG_MCAST_ENABLED flag set, it can only change under + * multicast_lock which must be already held here, so the vlan's pmctx + * can safely be used on return + */ + rcu_read_lock(); + vlan = br_vlan_find(nbp_vlan_group(pg->key.port), pg->key.addr.vid); + if (vlan && !br_multicast_port_ctx_vlan_disabled(&vlan->port_mcast_ctx)) + pmctx = &vlan->port_mcast_ctx; + else + pmctx = NULL; + rcu_read_unlock(); +out: + return pmctx; +} + static bool br_port_group_equal(struct net_bridge_port_group *p, struct net_bridge_port *port, const unsigned char *src) From patchwork Mon Jul 19 17:06:30 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nikolay Aleksandrov X-Patchwork-Id: 12386385 X-Patchwork-Delegate: kuba@kernel.org Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.8 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 6C381C63798 for ; Mon, 19 Jul 2021 17:22:36 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 6495E61107 for ; Mon, 19 Jul 2021 17:22:36 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1355720AbhGSQjh (ORCPT ); Mon, 19 Jul 2021 12:39:37 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:36162 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1355664AbhGSQgd (ORCPT ); Mon, 19 Jul 2021 12:36:33 -0400 Received: from mail-ej1-x634.google.com (mail-ej1-x634.google.com [IPv6:2a00:1450:4864:20::634]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 31D69C078800 for ; Mon, 19 Jul 2021 09:49:52 -0700 (PDT) Received: by mail-ej1-x634.google.com with SMTP id hr1so29953260ejc.1 for ; Mon, 19 Jul 2021 10:10:06 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=blackwall-org.20150623.gappssmtp.com; s=20150623; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=fZz7pbaMFBXMMthY41Q8WXcS+BZF9NZr318RJ38DIsk=; b=yhsXLFla7f+RYUqMuSi67fHFv/I/NIox181f3t4PaOeMA4hR2IjEUrCjfyqH+WJT5M sllGYxNIOCqfaoB7fIO5dpmFHq/xrYip6StSRwUOtHXpBT7Q19ltG1H30GoU/QvgG7No xVtjXvMltMkxnyno9Tz3nuHHoA6tY7bJXGC0KLuxSVKvTL39DF8qFf4YOraieqIAqfez SAPhkbl/j4hKfhl/Zy3Px74M2pp6csBi35ShRGS40tveH9BXkkzS2rjNF5I8I2Jj853D 0It0t90cFpvnlJX53YNLoWZIrFJw0m4TKax4IlHV/AKhtyl1kteizoNdnqjj9e91m2un CxXg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; 
d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=fZz7pbaMFBXMMthY41Q8WXcS+BZF9NZr318RJ38DIsk=; b=VPqq4D+qubYT71+zDvoWvNuVlcy7F033EItAHmDpM4bF2rH9d7EbmaUGaDK4hejc0i nZMswc0tJy4Xdzq8GoGcGWnUwQkkWasAdVtZFCz2qugPrf5WfWrP/q15NKpLjBE8dfcX myuh2pWa6KYusQjO670gT09bNrPbIUNG/Z3mGRYtBqG/TwzFUxPCZcAL237Jy35KYH35 5RcQlTV+HRlaYnagSD7FCHQUHb0wGxYelchTc80HP2+vmDPMZaVsEluaVBOIDddG4bj/ 5xU9FXfqNIXQ8G9SEVwheAiy6QSMoA57Z+UuVhdLR8psrv+csR+Fcr1u1HM7+3DPsJft vapw== X-Gm-Message-State: AOAM533qr4dbI3BjyAn3sHZm6oWBQ/WH9cePGxczR0NpZW7N9bpqguNJ 5RsGRrkAstmQ+gbXpfVL4iWZ9CIhjIyk3S8jFGM= X-Google-Smtp-Source: ABdhPJwlPD9prqAhy84rIzgV4ClGf/UkQez1Xn5Ho2M/obGixL/BaeOQG2pyEYpj1y2PIK2FVZXiFw== X-Received: by 2002:a17:906:6814:: with SMTP id k20mr28028206ejr.381.1626714604410; Mon, 19 Jul 2021 10:10:04 -0700 (PDT) Received: from debil.vdiclient.nvidia.com (84-238-136-197.ip.btc-net.bg. [84.238.136.197]) by smtp.gmail.com with ESMTPSA id nc29sm6073896ejc.10.2021.07.19.10.10.03 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 19 Jul 2021 10:10:03 -0700 (PDT) From: Nikolay Aleksandrov To: netdev@vger.kernel.org Cc: roopa@nvidia.com, bridge@lists.linux-foundation.org, Nikolay Aleksandrov Subject: [PATCH net-next 08/15] net: bridge: multicast: use the port group to port context helper Date: Mon, 19 Jul 2021 20:06:30 +0300 Message-Id: <20210719170637.435541-9-razor@blackwall.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20210719170637.435541-1-razor@blackwall.org> References: <20210719170637.435541-1-razor@blackwall.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org X-Patchwork-Delegate: kuba@kernel.org From: Nikolay Aleksandrov We need to use the new port group to port context helper in places where we cannot pass down the proper context (i.e. functions that can be called by timers or outside the packet snooping paths). 
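For illustration only, not part of the patch: a minimal sketch (with a hypothetical caller name) of the lookup pattern this change applies in timer and other out-of-band paths: derive the per-port (or per-port,vlan) context from the port group while holding multicast_lock, bail out if it is NULL (i.e. the vlan context is disabled or gone), then fetch the global context from it.

/* hypothetical caller, shown only to illustrate the pattern used below */
static void example_pg_timer_handler(struct net_bridge_port_group *pg)
{
        struct net_bridge *br = pg->key.port->br;
        struct net_bridge_mcast_port *pmctx;
        struct net_bridge_mcast *brmctx;

        spin_lock(&br->multicast_lock);
        pmctx = br_multicast_pg_to_port_ctx(pg);  /* requires multicast_lock */
        if (!pmctx)
                goto out;                 /* vlan mcast ctx disabled/removed */
        brmctx = br_multicast_port_ctx_get_global(pmctx);
        /* ... read timer values/queriers from brmctx, port state from pmctx ... */
out:
        spin_unlock(&br->multicast_lock);
}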
Signed-off-by: Nikolay Aleksandrov --- net/bridge/br_multicast.c | 21 +++++++++++++++------ 1 file changed, 15 insertions(+), 6 deletions(-) diff --git a/net/bridge/br_multicast.c b/net/bridge/br_multicast.c index 353406f2971a..e61e23c0ce17 100644 --- a/net/bridge/br_multicast.c +++ b/net/bridge/br_multicast.c @@ -309,7 +309,9 @@ void br_multicast_star_g_handle_mode(struct net_bridge_port_group *pg, mp = br_mdb_ip_get(br, &pg->key.addr); if (!mp) return; - pmctx = &pg->key.port->multicast_ctx; + pmctx = br_multicast_pg_to_port_ctx(pg); + if (!pmctx) + return; memset(&sg_ip, 0, sizeof(sg_ip)); sg_ip = pg->key.addr; @@ -435,7 +437,6 @@ void br_multicast_sg_add_exclude_ports(struct net_bridge_mdb_entry *star_mp, br_multicast_sg_host_state(star_mp, sg); memset(&sg_key, 0, sizeof(sg_key)); sg_key.addr = sg->key.addr; - brmctx = &br->multicast_ctx; /* we need to add all exclude ports to the S,G */ for (pg = mlock_dereference(star_mp->ports, br); pg; @@ -449,7 +450,11 @@ void br_multicast_sg_add_exclude_ports(struct net_bridge_mdb_entry *star_mp, if (br_sg_port_find(br, &sg_key)) continue; - pmctx = &pg->key.port->multicast_ctx; + pmctx = br_multicast_pg_to_port_ctx(pg); + if (!pmctx) + continue; + brmctx = br_multicast_port_ctx_get_global(pmctx); + src_pg = __br_multicast_add_group(brmctx, pmctx, &sg->key.addr, sg->eth_addr, @@ -473,7 +478,9 @@ static void br_multicast_fwd_src_add(struct net_bridge_group_src *src) return; memset(&sg_ip, 0, sizeof(sg_ip)); - pmctx = &src->pg->key.port->multicast_ctx; + pmctx = br_multicast_pg_to_port_ctx(src->pg); + if (!pmctx) + return; brmctx = br_multicast_port_ctx_get_global(pmctx); sg_ip = src->pg->key.addr; sg_ip.src = src->addr.src; @@ -1696,8 +1703,10 @@ static void br_multicast_port_group_rexmit(struct timer_list *t) !br_opt_get(br, BROPT_MULTICAST_QUERIER)) goto out; - brmctx = &br->multicast_ctx; - pmctx = &pg->key.port->multicast_ctx; + pmctx = br_multicast_pg_to_port_ctx(pg); + if (!pmctx) + goto out; + brmctx = br_multicast_port_ctx_get_global(pmctx); if (pg->key.addr.proto == htons(ETH_P_IP)) other_query = &brmctx->ip4_other_query; #if IS_ENABLED(CONFIG_IPV6) From patchwork Mon Jul 19 17:06:31 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nikolay Aleksandrov X-Patchwork-Id: 12386373 X-Patchwork-Delegate: kuba@kernel.org Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.8 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 9E048C6379F for ; Mon, 19 Jul 2021 17:22:36 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 954656113B for ; Mon, 19 Jul 2021 17:22:36 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1355789AbhGSQjy (ORCPT ); Mon, 19 Jul 2021 12:39:54 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:36236 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1343907AbhGSQgf (ORCPT ); Mon, 19 Jul 2021 12:36:35 -0400 Received: from mail-ed1-x52a.google.com (mail-ed1-x52a.google.com [IPv6:2a00:1450:4864:20::52a]) by 
lindbergh.monkeyblade.net (Postfix) with ESMTPS id 2BC5AC078805 for ; Mon, 19 Jul 2021 09:49:53 -0700 (PDT) Received: by mail-ed1-x52a.google.com with SMTP id w14so24933245edc.8 for ; Mon, 19 Jul 2021 10:10:06 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=blackwall-org.20150623.gappssmtp.com; s=20150623; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=aa9EkDLLjEgvNaQ45dS4vjblrco25Edm0gQrhh+rucs=; b=s/J9OFUiMDUavPy3lbiT2XjQCbhSDyUpClVHzKIr8FtCmsNIASHLNE7DVNv2Hdxzco A3Ydvyk9ubkrCHLCw7rXAiGMJ+iY4IBwtNNik5Df6Fij6xb9IbtS5bn1dy8zHeF7INpr x9sfT0do+WONHhnXg6Z5stftCKRpNtmjzUiRAoluGZYAJk0seVacWo5iz0qmNUEZOXxb i6sIm27wp/T/yPi+tGzykiIVMGSKcWWum6x3VXy1Fl3Wy+jA0YIvwh9Yd6HoujunDIS0 TaBneWVPa6tDOabZqmOOfPZHZvq+A3jwVxHASW87s9lMz44mcHme0gkYr5JwchmEwK/B d+yQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=aa9EkDLLjEgvNaQ45dS4vjblrco25Edm0gQrhh+rucs=; b=OawjAxpFQ1JM7R8yFcAB7cLRxc+h0aEilDWWeCLd350TzeyICUiBOHmf/P+3kP7mu3 XcoMQ3AZfxNph5cWeuu7PUhyBpu2Eg9DkRJKMBJnRLE0yQWev+44kSPHdtHV6Ntoc8wj r4469KuFZQFZ7ogVo/UcouSB/3M9KNFhlcIG3j0Zm/8QkzIr69kD4k5K25IjL4p8Is0M X/GxsWODgV5SFvPnQJnwRHRB1y4lUoZ2Z+FSutJndk35SWYktponsMPRygk5NuUMgYHs WAYAfhbV710i7zvgtkBErNnUG7nutSB1xZcHBIQJJ4bAuBUv5QpSGPJ4r954je3PNc9N V5uQ== X-Gm-Message-State: AOAM533k/Ix59iNXC9vfcA3x1olZ7gUS1AQFmILMZPfbNtzFFIjilCN0 AVcUCDILuKEkD1c4CySNHtLLCkwp5Y+tgN71jk0= X-Google-Smtp-Source: ABdhPJyJP5xO+t87FDVxZYV8QlW8EwgLWCdwKAvfpuLiSA5jvw0fj3dzP7WKrcZWzoeKcIkABjCoIw== X-Received: by 2002:a50:ce45:: with SMTP id k5mr16565860edj.168.1626714605335; Mon, 19 Jul 2021 10:10:05 -0700 (PDT) Received: from debil.vdiclient.nvidia.com (84-238-136-197.ip.btc-net.bg. [84.238.136.197]) by smtp.gmail.com with ESMTPSA id nc29sm6073896ejc.10.2021.07.19.10.10.04 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 19 Jul 2021 10:10:04 -0700 (PDT) From: Nikolay Aleksandrov To: netdev@vger.kernel.org Cc: roopa@nvidia.com, bridge@lists.linux-foundation.org, Nikolay Aleksandrov Subject: [PATCH net-next 09/15] net: bridge: multicast: check if should use vlan mcast ctx Date: Mon, 19 Jul 2021 20:06:31 +0300 Message-Id: <20210719170637.435541-10-razor@blackwall.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20210719170637.435541-1-razor@blackwall.org> References: <20210719170637.435541-1-razor@blackwall.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org X-Patchwork-Delegate: kuba@kernel.org From: Nikolay Aleksandrov Add helpers which check if the current bridge/port multicast context should be used (i.e. they're not disabled) and use them for Rx IGMP/MLD processing, timers and new group addition. It is important for vlans to disable processing of timer/packet after the multicast_lock is obtained if the vlan context doesn't have BR_VLFLAG_MCAST_ENABLED. 
There are two cases when that flag is missing: - if the vlan is getting destroyed it will be removed and timers will be stopped - if the vlan mcast snooping is being disabled Signed-off-by: Nikolay Aleksandrov --- net/bridge/br_multicast.c | 59 +++++++++++++++++++++++++++++---------- net/bridge/br_private.h | 18 ++++++++++++ 2 files changed, 62 insertions(+), 15 deletions(-) diff --git a/net/bridge/br_multicast.c b/net/bridge/br_multicast.c index e61e23c0ce17..4620946ec7d7 100644 --- a/net/bridge/br_multicast.c +++ b/net/bridge/br_multicast.c @@ -147,7 +147,8 @@ struct net_bridge_mdb_entry *br_mdb_get(struct net_bridge_mcast *brmctx, struct net_bridge *br = brmctx->br; struct br_ip ip; - if (!br_opt_get(br, BROPT_MULTICAST_ENABLED)) + if (!br_opt_get(br, BROPT_MULTICAST_ENABLED) || + br_multicast_ctx_vlan_global_disabled(brmctx)) return NULL; if (BR_INPUT_SKB_CB(skb)->igmp) @@ -230,6 +231,24 @@ br_multicast_pg_to_port_ctx(const struct net_bridge_port_group *pg) return pmctx; } +/* when snooping we need to check if the contexts should be used + * in the following order: + * - if pmctx is non-NULL (port), check if it should be used + * - if pmctx is NULL (bridge), check if brmctx should be used + */ +static bool +br_multicast_ctx_should_use(const struct net_bridge_mcast *brmctx, + const struct net_bridge_mcast_port *pmctx) +{ + if (!netif_running(brmctx->br->dev)) + return false; + + if (pmctx) + return !br_multicast_port_ctx_state_disabled(pmctx); + else + return !br_multicast_ctx_vlan_disabled(brmctx); +} + static bool br_port_group_equal(struct net_bridge_port_group *p, struct net_bridge_port *port, const unsigned char *src) @@ -1311,8 +1330,7 @@ __br_multicast_add_group(struct net_bridge_mcast *brmctx, struct net_bridge_mdb_entry *mp; unsigned long now = jiffies; - if (!netif_running(brmctx->br->dev) || - (pmctx && pmctx->port->state == BR_STATE_DISABLED)) + if (!br_multicast_ctx_should_use(brmctx, pmctx)) goto out; mp = br_multicast_new_group(brmctx->br, group); @@ -1532,6 +1550,7 @@ static void br_multicast_querier_expired(struct net_bridge_mcast *brmctx, { spin_lock(&brmctx->br->multicast_lock); if (!netif_running(brmctx->br->dev) || + br_multicast_ctx_vlan_global_disabled(brmctx) || !br_opt_get(brmctx->br, BROPT_MULTICAST_ENABLED)) goto out; @@ -1619,7 +1638,7 @@ static void br_multicast_send_query(struct net_bridge_mcast *brmctx, struct br_ip br_group; unsigned long time; - if (!netif_running(brmctx->br->dev) || + if (!br_multicast_ctx_should_use(brmctx, pmctx) || !br_opt_get(brmctx->br, BROPT_MULTICAST_ENABLED) || !br_opt_get(brmctx->br, BROPT_MULTICAST_QUERIER)) return; @@ -1655,16 +1674,16 @@ br_multicast_port_query_expired(struct net_bridge_mcast_port *pmctx, struct bridge_mcast_own_query *query) { struct net_bridge *br = pmctx->port->br; + struct net_bridge_mcast *brmctx; spin_lock(&br->multicast_lock); - if (pmctx->port->state == BR_STATE_DISABLED || - pmctx->port->state == BR_STATE_BLOCKING) + if (br_multicast_port_ctx_state_stopped(pmctx)) goto out; - - if (query->startup_sent < br->multicast_ctx.multicast_startup_query_count) + brmctx = br_multicast_port_ctx_get_global(pmctx); + if (query->startup_sent < brmctx->multicast_startup_query_count) query->startup_sent++; - br_multicast_send_query(&br->multicast_ctx, pmctx, query); + br_multicast_send_query(brmctx, pmctx, query); out: spin_unlock(&br->multicast_lock); @@ -2582,6 +2601,9 @@ static int br_ip4_multicast_igmp3_report(struct net_bridge_mcast *brmctx, continue; spin_lock_bh(&brmctx->br->multicast_lock); + if 
(!br_multicast_ctx_should_use(brmctx, pmctx)) + goto unlock_continue; + mdst = br_mdb_ip4_get(brmctx->br, group, vid); if (!mdst) goto unlock_continue; @@ -2717,6 +2739,9 @@ static int br_ip6_multicast_mld2_report(struct net_bridge_mcast *brmctx, continue; spin_lock_bh(&brmctx->br->multicast_lock); + if (!br_multicast_ctx_should_use(brmctx, pmctx)) + goto unlock_continue; + mdst = br_mdb_ip6_get(brmctx->br, &grec->grec_mca, vid); if (!mdst) goto unlock_continue; @@ -2962,6 +2987,9 @@ static void br_multicast_mark_router(struct net_bridge_mcast *brmctx, { unsigned long now = jiffies; + if (!br_multicast_ctx_should_use(brmctx, pmctx)) + return; + if (!pmctx) { if (brmctx->multicast_router == MDB_RTR_TYPE_TEMP_QUERY) { if (!br_ip4_multicast_is_router(brmctx) && @@ -3060,8 +3088,7 @@ static void br_ip4_multicast_query(struct net_bridge_mcast *brmctx, __be32 group; spin_lock(&brmctx->br->multicast_lock); - if (!netif_running(brmctx->br->dev) || - (pmctx && pmctx->port->state == BR_STATE_DISABLED)) + if (!br_multicast_ctx_should_use(brmctx, pmctx)) goto out; group = ih->group; @@ -3144,8 +3171,7 @@ static int br_ip6_multicast_query(struct net_bridge_mcast *brmctx, int err = 0; spin_lock(&brmctx->br->multicast_lock); - if (!netif_running(brmctx->br->dev) || - (pmctx && pmctx->port->state == BR_STATE_DISABLED)) + if (!br_multicast_ctx_should_use(brmctx, pmctx)) goto out; if (transport_len == sizeof(*mld)) { @@ -3229,8 +3255,7 @@ br_multicast_leave_group(struct net_bridge_mcast *brmctx, unsigned long time; spin_lock(&brmctx->br->multicast_lock); - if (!netif_running(brmctx->br->dev) || - (pmctx && pmctx->port->state == BR_STATE_DISABLED)) + if (!br_multicast_ctx_should_use(brmctx, pmctx)) goto out; mp = br_mdb_ip_get(brmctx->br, group); @@ -3609,11 +3634,15 @@ static void br_multicast_query_expired(struct net_bridge_mcast *brmctx, struct bridge_mcast_querier *querier) { spin_lock(&brmctx->br->multicast_lock); + if (br_multicast_ctx_vlan_disabled(brmctx)) + goto out; + if (query->startup_sent < brmctx->multicast_startup_query_count) query->startup_sent++; RCU_INIT_POINTER(querier->port, NULL); br_multicast_send_query(brmctx, NULL, query); +out: spin_unlock(&brmctx->br->multicast_lock); } diff --git a/net/bridge/br_private.h b/net/bridge/br_private.h index a643f6bf759f..00b93fcc7870 100644 --- a/net/bridge/br_private.h +++ b/net/bridge/br_private.h @@ -1126,6 +1126,24 @@ br_multicast_port_ctx_vlan_disabled(const struct net_bridge_mcast_port *pmctx) return br_multicast_port_ctx_is_vlan(pmctx) && !(pmctx->vlan->priv_flags & BR_VLFLAG_MCAST_ENABLED); } + +static inline bool +br_multicast_port_ctx_state_disabled(const struct net_bridge_mcast_port *pmctx) +{ + return pmctx->port->state == BR_STATE_DISABLED || + (br_multicast_port_ctx_is_vlan(pmctx) && + (br_multicast_port_ctx_vlan_disabled(pmctx) || + pmctx->vlan->state == BR_STATE_DISABLED)); +} + +static inline bool +br_multicast_port_ctx_state_stopped(const struct net_bridge_mcast_port *pmctx) +{ + return br_multicast_port_ctx_state_disabled(pmctx) || + pmctx->port->state == BR_STATE_BLOCKING || + (br_multicast_port_ctx_is_vlan(pmctx) && + pmctx->vlan->state == BR_STATE_BLOCKING); +} #else static inline int br_multicast_rcv(struct net_bridge_mcast **brmctx, struct net_bridge_mcast_port **pmctx, From patchwork Mon Jul 19 17:06:32 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nikolay Aleksandrov X-Patchwork-Id: 12386371 X-Patchwork-Delegate: kuba@kernel.org Return-Path: 
X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.8 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 8F7AEC6379B for ; Mon, 19 Jul 2021 17:22:36 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 7C9796113C for ; Mon, 19 Jul 2021 17:22:36 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1355770AbhGSQjw (ORCPT ); Mon, 19 Jul 2021 12:39:52 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:35306 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1355677AbhGSQge (ORCPT ); Mon, 19 Jul 2021 12:36:34 -0400 Received: from mail-ej1-x634.google.com (mail-ej1-x634.google.com [IPv6:2a00:1450:4864:20::634]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 2D737C078807 for ; Mon, 19 Jul 2021 09:49:54 -0700 (PDT) Received: by mail-ej1-x634.google.com with SMTP id qa36so8541628ejc.10 for ; Mon, 19 Jul 2021 10:10:07 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=blackwall-org.20150623.gappssmtp.com; s=20150623; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=FB7z8k/P/2xrVSLTebtzWJZe24gB6OgBCQ9IKuHeTdQ=; b=1eOKeSvNmWJpOonKmqwgF0o6//+djWs/KaiQLgUXOGmLvJ41NYoEowIGXmtRifAulq hj7aoWP4DFVU2ZMRp7Zgz3d9N6m57AyJRvzHIC/Fo46417Lzcuy28L7dN+W1lXtROx5c RqF5XhTbgWPakrlm/D/uC/3s2ncEHv+4KeiIXp+6ppWfPjGvsI/RL5YizZv4f0qgol74 VhDGstaycL2WXEqv6t3NXoVhg3imIto1DgcBVHNjJa93n+mFlkBkMOH4krd1Viaek2gG 0IJV5iTKKVH/bFbh7dOEUa33oXk+hRWpNwFd02CzGIA72kb2limj9uyyfhVl7EVqb/3M 1phg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=FB7z8k/P/2xrVSLTebtzWJZe24gB6OgBCQ9IKuHeTdQ=; b=RBQs7EOo2fzEZkzmOiGzQxswEKPFTSKenED9pbR5sCr1SbMA+LpWeUWic6aLDSgGpz LlTnm8HpbOdXaePfAWgL+bwJMyJRBJpmVgJlRJMHQ0GrqAx+N4FkD/FJ5tO+Axknd4Yv vymKoXYrNJH4gakl4ED/UqiIyNXa5/WsPTisI0avku7y/P3GXR5x7LyewjX9Y13/Qstb H8zKW7b8n2/bWJZX1cGYKf5Q6ClWbsO9R2adEpJNaA/kNMc5RnWIQGmDe9zf6vU+S90/ b7gJntZkQrMX9PHntsIfQN8xm3UuhEQD+PWgZxmSM1VDf/e3+mH4SusDANhz+zZFdZfg 2HPw== X-Gm-Message-State: AOAM531ELVGTCizdtj9r1bxGCMXW6ZNxGxnDDYa9W+j0n/gMFE8Z2jsy MLuq2MNz2Q9ENXL9JadRMZBJVUlWNUBrjIHy7Kk= X-Google-Smtp-Source: ABdhPJy52QEAmfoTm6M5kQc3jmUAMliTOcnOomUfqdkZxMs16Ldh50WbQjN2vNHTAlmGONJfSQ+aiQ== X-Received: by 2002:a17:906:384c:: with SMTP id w12mr28295880ejc.445.1626714606224; Mon, 19 Jul 2021 10:10:06 -0700 (PDT) Received: from debil.vdiclient.nvidia.com (84-238-136-197.ip.btc-net.bg. 
[84.238.136.197]) by smtp.gmail.com with ESMTPSA id nc29sm6073896ejc.10.2021.07.19.10.10.05 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 19 Jul 2021 10:10:05 -0700 (PDT) From: Nikolay Aleksandrov To: netdev@vger.kernel.org Cc: roopa@nvidia.com, bridge@lists.linux-foundation.org, Nikolay Aleksandrov Subject: [PATCH net-next 10/15] net: bridge: multicast: add vlan querier and query support Date: Mon, 19 Jul 2021 20:06:32 +0300 Message-Id: <20210719170637.435541-11-razor@blackwall.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20210719170637.435541-1-razor@blackwall.org> References: <20210719170637.435541-1-razor@blackwall.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org X-Patchwork-Delegate: kuba@kernel.org From: Nikolay Aleksandrov Add basic vlan context querier support, if the contexts passed to multicast_alloc_query are vlan then the query will be tagged. Also handle querier start/stop of vlan contexts. Signed-off-by: Nikolay Aleksandrov --- net/bridge/br_multicast.c | 68 ++++++++++++++++++++++++++++++++++----- 1 file changed, 60 insertions(+), 8 deletions(-) diff --git a/net/bridge/br_multicast.c b/net/bridge/br_multicast.c index 4620946ec7d7..9d4a18a711e4 100644 --- a/net/bridge/br_multicast.c +++ b/net/bridge/br_multicast.c @@ -773,7 +773,28 @@ static void br_multicast_gc(struct hlist_head *head) } } +static void __br_multicast_query_handle_vlan(struct net_bridge_mcast *brmctx, + struct net_bridge_mcast_port *pmctx, + struct sk_buff *skb) +{ + struct net_bridge_vlan *vlan = NULL; + + if (pmctx && br_multicast_port_ctx_is_vlan(pmctx)) + vlan = pmctx->vlan; + else if (br_multicast_ctx_is_vlan(brmctx)) + vlan = brmctx->vlan; + + if (vlan && !(vlan->flags & BRIDGE_VLAN_INFO_UNTAGGED)) { + u16 vlan_proto; + + if (br_vlan_get_proto(brmctx->br->dev, &vlan_proto) != 0) + return; + __vlan_hwaccel_put_tag(skb, htons(vlan_proto), vlan->vid); + } +} + static struct sk_buff *br_ip4_multicast_alloc_query(struct net_bridge_mcast *brmctx, + struct net_bridge_mcast_port *pmctx, struct net_bridge_port_group *pg, __be32 ip_dst, __be32 group, bool with_srcs, bool over_lmqt, @@ -822,6 +843,7 @@ static struct sk_buff *br_ip4_multicast_alloc_query(struct net_bridge_mcast *brm if (!skb) goto out; + __br_multicast_query_handle_vlan(brmctx, pmctx, skb); skb->protocol = htons(ETH_P_IP); skb_reset_mac_header(skb); @@ -919,6 +941,7 @@ static struct sk_buff *br_ip4_multicast_alloc_query(struct net_bridge_mcast *brm #if IS_ENABLED(CONFIG_IPV6) static struct sk_buff *br_ip6_multicast_alloc_query(struct net_bridge_mcast *brmctx, + struct net_bridge_mcast_port *pmctx, struct net_bridge_port_group *pg, const struct in6_addr *ip6_dst, const struct in6_addr *group, @@ -970,6 +993,7 @@ static struct sk_buff *br_ip6_multicast_alloc_query(struct net_bridge_mcast *brm if (!skb) goto out; + __br_multicast_query_handle_vlan(brmctx, pmctx, skb); skb->protocol = htons(ETH_P_IPV6); /* Ethernet header */ @@ -1082,6 +1106,7 @@ static struct sk_buff *br_ip6_multicast_alloc_query(struct net_bridge_mcast *brm #endif static struct sk_buff *br_multicast_alloc_query(struct net_bridge_mcast *brmctx, + struct net_bridge_mcast_port *pmctx, struct net_bridge_port_group *pg, struct br_ip *ip_dst, struct br_ip *group, @@ -1094,7 +1119,7 @@ static struct sk_buff *br_multicast_alloc_query(struct net_bridge_mcast *brmctx, switch (group->proto) { case htons(ETH_P_IP): ip4_dst = ip_dst ? 
ip_dst->dst.ip4 : htonl(INADDR_ALLHOSTS_GROUP); - return br_ip4_multicast_alloc_query(brmctx, pg, + return br_ip4_multicast_alloc_query(brmctx, pmctx, pg, ip4_dst, group->dst.ip4, with_srcs, over_lmqt, sflag, igmp_type, @@ -1109,7 +1134,7 @@ static struct sk_buff *br_multicast_alloc_query(struct net_bridge_mcast *brmctx, ipv6_addr_set(&ip6_dst, htonl(0xff020000), 0, 0, htonl(1)); - return br_ip6_multicast_alloc_query(brmctx, pg, + return br_ip6_multicast_alloc_query(brmctx, pmctx, pg, &ip6_dst, &group->dst.ip6, with_srcs, over_lmqt, sflag, igmp_type, @@ -1603,9 +1628,12 @@ static void __br_multicast_send_query(struct net_bridge_mcast *brmctx, struct sk_buff *skb; u8 igmp_type; + if (!br_multicast_ctx_should_use(brmctx, pmctx)) + return; + again_under_lmqt: - skb = br_multicast_alloc_query(brmctx, pg, ip_dst, group, with_srcs, - over_lmqt, sflag, &igmp_type, + skb = br_multicast_alloc_query(brmctx, pmctx, pg, ip_dst, group, + with_srcs, over_lmqt, sflag, &igmp_type, need_rexmit); if (!skb) return; @@ -1679,6 +1707,7 @@ br_multicast_port_query_expired(struct net_bridge_mcast_port *pmctx, spin_lock(&br->multicast_lock); if (br_multicast_port_ctx_state_stopped(pmctx)) goto out; + brmctx = br_multicast_port_ctx_get_global(pmctx); if (query->startup_sent < brmctx->multicast_startup_query_count) query->startup_sent++; @@ -4129,15 +4158,38 @@ static void br_multicast_start_querier(struct net_bridge_mcast *brmctx, rcu_read_lock(); list_for_each_entry_rcu(port, &brmctx->br->port_list, list) { - if (port->state == BR_STATE_DISABLED || - port->state == BR_STATE_BLOCKING) + struct bridge_mcast_own_query *ip4_own_query; +#if IS_ENABLED(CONFIG_IPV6) + struct bridge_mcast_own_query *ip6_own_query; +#endif + + if (br_multicast_port_ctx_state_stopped(&port->multicast_ctx)) continue; + if (br_multicast_ctx_is_vlan(brmctx)) { + struct net_bridge_vlan *vlan; + + vlan = br_vlan_find(nbp_vlan_group(port), brmctx->vlan->vid); + if (!vlan || + br_multicast_port_ctx_state_stopped(&vlan->port_mcast_ctx)) + continue; + + ip4_own_query = &vlan->port_mcast_ctx.ip4_own_query; +#if IS_ENABLED(CONFIG_IPV6) + ip6_own_query = &vlan->port_mcast_ctx.ip6_own_query; +#endif + } else { + ip4_own_query = &port->multicast_ctx.ip4_own_query; +#if IS_ENABLED(CONFIG_IPV6) + ip6_own_query = &port->multicast_ctx.ip6_own_query; +#endif + } + if (query == &brmctx->ip4_own_query) - br_multicast_enable(&port->multicast_ctx.ip4_own_query); + br_multicast_enable(ip4_own_query); #if IS_ENABLED(CONFIG_IPV6) else - br_multicast_enable(&port->multicast_ctx.ip6_own_query); + br_multicast_enable(ip6_own_query); #endif } rcu_read_unlock(); From patchwork Mon Jul 19 17:06:33 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nikolay Aleksandrov X-Patchwork-Id: 12386389 X-Patchwork-Delegate: kuba@kernel.org Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.8 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 8017BC07E9D for ; Mon, 19 Jul 2021 17:22:37 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 61359610A5 for ; Mon, 19 Jul 
2021 17:22:37 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1355806AbhGSQj5 (ORCPT ); Mon, 19 Jul 2021 12:39:57 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:36250 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1345368AbhGSQgg (ORCPT ); Mon, 19 Jul 2021 12:36:36 -0400 Received: from mail-ed1-x536.google.com (mail-ed1-x536.google.com [IPv6:2a00:1450:4864:20::536]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 386CBC07880E for ; Mon, 19 Jul 2021 09:49:55 -0700 (PDT) Received: by mail-ed1-x536.google.com with SMTP id k27so24974889edk.9 for ; Mon, 19 Jul 2021 10:10:08 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=blackwall-org.20150623.gappssmtp.com; s=20150623; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=XWAMcGQgcWJL3j09SfjKb/XvvQ805bCMahly93jAMwg=; b=fslx44Dp1STX8Hakf2R6egp847gLkkW65fDlNgVYZ6QpnpYvhpUwGZ6Fu13ibGgRJp Z2HFHwjOpYfGTFVx+GN84UNVFbVFEOFRuAeUJcJuTBbZCcebyb9GkJw/Vy41OLCueBr1 ky/+e9L73QNnYTNcvWW60TxhF3UWJk5s+E0lFZ9R2m9NNkyI2N/t/d6qgIgzMRkdKqSO HI9/2mTzlQtAMosvDTtBeSsfdGmUFALSJMM3nzrIEtOGDUsZvxBn4nBzEDrTRNGvBPBB 5jrAx/6Br/r/sQJSV4DGLsbXg7nAyYabfYV+195pRP02k5AasTigKSD8Y2rbvm86WHIo dPKQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=XWAMcGQgcWJL3j09SfjKb/XvvQ805bCMahly93jAMwg=; b=d6guwLmZVGN5quiqXtBB4fnpuK8m4Hl62R4v5QIugpdzOGZeBvGUUSCQPBu73o+prW 6Y1c96ZtU/b1FTh93iP6K+B3fHRsjH5G8yLZUMdlAtpAO+aaulzjFCI15/El9o8fq0Za d1J594tO09GI2ryAF2dJbnKaFCU3RUHACA66hhJ1HJ8DXRLFzNEICZ5nhWc9R8eSeKjX 7aE2z2xEsmqqe9GZo72tOweneXmtVDpriwvovMvAtrnbFAIsDE3i+0fG01RZ5hFvmJ0t OvmsdyMkmi3YGQb7vZdAXVojOalI02kmfOk69WgV6f56nheYM+6PBa96lIGkOoMWH0q5 yFww== X-Gm-Message-State: AOAM530+rh8Y2KgsYvy67HOstQItK+n4Vll5Vak/3LQd3behl0nl88tW 0UkQozg2rIPuoSHEGV/uI1hpbql4ViWN3sLWJfU= X-Google-Smtp-Source: ABdhPJyG8JZKjVP1WNSfUjUl7sk4Ty6qx1bvItnBDMJHthFTCgXGUNTtBh9eMNoyUKigAyGD+YBfcA== X-Received: by 2002:aa7:ca44:: with SMTP id j4mr35001283edt.203.1626714607133; Mon, 19 Jul 2021 10:10:07 -0700 (PDT) Received: from debil.vdiclient.nvidia.com (84-238-136-197.ip.btc-net.bg. [84.238.136.197]) by smtp.gmail.com with ESMTPSA id nc29sm6073896ejc.10.2021.07.19.10.10.06 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 19 Jul 2021 10:10:06 -0700 (PDT) From: Nikolay Aleksandrov To: netdev@vger.kernel.org Cc: roopa@nvidia.com, bridge@lists.linux-foundation.org, Nikolay Aleksandrov Subject: [PATCH net-next 11/15] net: bridge: multicast: include router port vlan id in notifications Date: Mon, 19 Jul 2021 20:06:33 +0300 Message-Id: <20210719170637.435541-12-razor@blackwall.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20210719170637.435541-1-razor@blackwall.org> References: <20210719170637.435541-1-razor@blackwall.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org X-Patchwork-Delegate: kuba@kernel.org From: Nikolay Aleksandrov Use the port multicast context to check if the router port is a vlan and in case it is include its vlan id in the notification. 
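For illustration only, not part of the patch: with this change the MDBA_ROUTER_PORT nest still begins with a bare u32 ifindex (put via nla_put_nohdr) and may now also carry MDBA_ROUTER_PATTR_VID. A minimal userspace parse sketch under that assumption (hypothetical function name, assumes uapi headers updated with the new attribute):

#include <string.h>
#include <stdint.h>
#include <linux/rtnetlink.h>
#include <linux/if_bridge.h>

static void example_parse_router_port(struct rtattr *port_attr)
{
        /* assumed nest payload layout: [ u32 ifindex ][ MDBA_ROUTER_PATTR_* attrs ] */
        struct rtattr *a;
        uint32_t ifindex;
        uint16_t vid = 0;
        int len;

        memcpy(&ifindex, RTA_DATA(port_attr), sizeof(ifindex));
        a = (struct rtattr *)((char *)RTA_DATA(port_attr) + sizeof(uint32_t));
        len = RTA_PAYLOAD(port_attr) - sizeof(uint32_t);
        for (; RTA_OK(a, len); a = RTA_NEXT(a, len)) {
                if (a->rta_type == MDBA_ROUTER_PATTR_VID)
                        memcpy(&vid, RTA_DATA(a), sizeof(vid));
        }
        /* ifindex is the router port; a non-zero vid means the notification
         * refers to that port's per-vlan multicast context
         */
}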
Signed-off-by: Nikolay Aleksandrov --- include/uapi/linux/if_bridge.h | 1 + net/bridge/br_mdb.c | 29 ++++++++++++++++++++++------- net/bridge/br_multicast.c | 4 ++-- net/bridge/br_private.h | 2 +- 4 files changed, 26 insertions(+), 10 deletions(-) diff --git a/include/uapi/linux/if_bridge.h b/include/uapi/linux/if_bridge.h index 57a63a1572e0..1f7513300cfe 100644 --- a/include/uapi/linux/if_bridge.h +++ b/include/uapi/linux/if_bridge.h @@ -629,6 +629,7 @@ enum { MDBA_ROUTER_PATTR_TYPE, MDBA_ROUTER_PATTR_INET_TIMER, MDBA_ROUTER_PATTR_INET6_TIMER, + MDBA_ROUTER_PATTR_VID, __MDBA_ROUTER_PATTR_MAX }; #define MDBA_ROUTER_PATTR_MAX (__MDBA_ROUTER_PATTR_MAX - 1) diff --git a/net/bridge/br_mdb.c b/net/bridge/br_mdb.c index 5319587198eb..d3383a47a2f2 100644 --- a/net/bridge/br_mdb.c +++ b/net/bridge/br_mdb.c @@ -781,12 +781,12 @@ void br_mdb_notify(struct net_device *dev, static int nlmsg_populate_rtr_fill(struct sk_buff *skb, struct net_device *dev, - int ifindex, u32 pid, + int ifindex, u16 vid, u32 pid, u32 seq, int type, unsigned int flags) { + struct nlattr *nest, *port_nest; struct br_port_msg *bpm; struct nlmsghdr *nlh; - struct nlattr *nest; nlh = nlmsg_put(skb, pid, seq, type, sizeof(*bpm), 0); if (!nlh) @@ -800,8 +800,18 @@ static int nlmsg_populate_rtr_fill(struct sk_buff *skb, if (!nest) goto cancel; - if (nla_put_u32(skb, MDBA_ROUTER_PORT, ifindex)) + port_nest = nla_nest_start_noflag(skb, MDBA_ROUTER_PORT); + if (!port_nest) + goto end; + if (nla_put_nohdr(skb, sizeof(u32), &ifindex)) { + nla_nest_cancel(skb, port_nest); goto end; + } + if (vid && nla_put_u16(skb, MDBA_ROUTER_PATTR_VID, vid)) { + nla_nest_cancel(skb, port_nest); + goto end; + } + nla_nest_end(skb, port_nest); nla_nest_end(skb, nest); nlmsg_end(skb, nlh); @@ -817,23 +827,28 @@ static int nlmsg_populate_rtr_fill(struct sk_buff *skb, static inline size_t rtnl_rtr_nlmsg_size(void) { return NLMSG_ALIGN(sizeof(struct br_port_msg)) - + nla_total_size(sizeof(__u32)); + + nla_total_size(sizeof(__u32)) + + nla_total_size(sizeof(u16)); } -void br_rtr_notify(struct net_device *dev, struct net_bridge_port *port, +void br_rtr_notify(struct net_device *dev, struct net_bridge_mcast_port *pmctx, int type) { struct net *net = dev_net(dev); struct sk_buff *skb; int err = -ENOBUFS; int ifindex; + u16 vid; - ifindex = port ? port->dev->ifindex : 0; + ifindex = pmctx ? pmctx->port->dev->ifindex : 0; + vid = pmctx && br_multicast_port_ctx_is_vlan(pmctx) ? pmctx->vlan->vid : + 0; skb = nlmsg_new(rtnl_rtr_nlmsg_size(), GFP_ATOMIC); if (!skb) goto errout; - err = nlmsg_populate_rtr_fill(skb, dev, ifindex, 0, 0, type, NTF_SELF); + err = nlmsg_populate_rtr_fill(skb, dev, ifindex, vid, 0, 0, type, + NTF_SELF); if (err < 0) { kfree_skb(skb); goto errout; diff --git a/net/bridge/br_multicast.c b/net/bridge/br_multicast.c index 9d4a18a711e4..fb5e5df571fd 100644 --- a/net/bridge/br_multicast.c +++ b/net/bridge/br_multicast.c @@ -2979,7 +2979,7 @@ static void br_multicast_add_router(struct net_bridge_mcast *brmctx, * IPv4 or IPv6 multicast router. 
*/ if (br_multicast_no_router_otherpf(pmctx, rlist)) { - br_rtr_notify(pmctx->port->br->dev, pmctx->port, RTM_NEWMDB); + br_rtr_notify(pmctx->port->br->dev, pmctx, RTM_NEWMDB); br_port_mc_router_state_change(pmctx->port, true); } } @@ -4078,7 +4078,7 @@ br_multicast_rport_del_notify(struct net_bridge_mcast_port *pmctx, bool deleted) return; #endif - br_rtr_notify(pmctx->port->br->dev, pmctx->port, RTM_DELMDB); + br_rtr_notify(pmctx->port->br->dev, pmctx, RTM_DELMDB); br_port_mc_router_state_change(pmctx->port, false); /* don't allow timer refresh */ diff --git a/net/bridge/br_private.h b/net/bridge/br_private.h index 00b93fcc7870..5515b6c37322 100644 --- a/net/bridge/br_private.h +++ b/net/bridge/br_private.h @@ -885,7 +885,7 @@ int br_mdb_hash_init(struct net_bridge *br); void br_mdb_hash_fini(struct net_bridge *br); void br_mdb_notify(struct net_device *dev, struct net_bridge_mdb_entry *mp, struct net_bridge_port_group *pg, int type); -void br_rtr_notify(struct net_device *dev, struct net_bridge_port *port, +void br_rtr_notify(struct net_device *dev, struct net_bridge_mcast_port *pmctx, int type); void br_multicast_del_pg(struct net_bridge_mdb_entry *mp, struct net_bridge_port_group *pg, From patchwork Mon Jul 19 17:06:34 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nikolay Aleksandrov X-Patchwork-Id: 12386381 X-Patchwork-Delegate: kuba@kernel.org Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.8 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id CECDFC64997 for ; Mon, 19 Jul 2021 17:22:38 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id B4F656121E for ; Mon, 19 Jul 2021 17:22:38 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1355874AbhGSQkE (ORCPT ); Mon, 19 Jul 2021 12:40:04 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:36162 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1347835AbhGSQgh (ORCPT ); Mon, 19 Jul 2021 12:36:37 -0400 Received: from mail-ej1-x631.google.com (mail-ej1-x631.google.com [IPv6:2a00:1450:4864:20::631]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 4E469C0613AF for ; Mon, 19 Jul 2021 09:49:56 -0700 (PDT) Received: by mail-ej1-x631.google.com with SMTP id hr1so29953509ejc.1 for ; Mon, 19 Jul 2021 10:10:09 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=blackwall-org.20150623.gappssmtp.com; s=20150623; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=29TqZv5oN4lRjIuLVLlHSdl22CZNOehTdPTwu5IVoLg=; b=UgKnCiitiLfGRIgT44411Dx5uiabkLnWL/K6/F6ItGI+xdJa+ctfbwl6hogSiD6Qhs OamGHqEACBiOsrfqae9Xd0hd4qHrbOSMDUs9iCgqDXol3LxZ4s406aUFW+MS8mYS3+uV befDHIrIZNIzmkRRH/GDkwT18p+0nP2YYw0yN388X5vJkxVbR2Dz2PuldE5uTDgts5lm Vy85Tev8e/kBHccEc55MVi4sPDXrD4zwopr4HKuVad76uTF66vu6kDCqhS+HqYRQOLXL cV0+Myv10OjpvL1km+yTu3vyb8cVIUyBReWTIHANrG1O4GGlaijBpuJL8nEELHdH741y QAyQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; 
h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=29TqZv5oN4lRjIuLVLlHSdl22CZNOehTdPTwu5IVoLg=; b=sCWc2kFNA+o6tSdjC9BCN0lrlJAnhdVP0TtYH3fTSb3GySRPn/6p8fwN2ndoqCWVgL 80W0siRxkyL5ADCN9YXDeU5W+ndhDhZERWkEGSDlHGP0OTQVvm3GXN9lv/0DnLws9QrQ 1RLFUPCa4vPeQdNMv5lmR/YVL6VE14dlLUtu4mcaD12Uy/3eu1S5W3ZdxbxGfFN/w60y I0eLOtx+dDJynoi9AvRAmTmkTYdsa8pCk5VP+kG9rgvu6HERO7sxB+k0g6h5VY/AYRTw TnvRBqCKs1nkMfDhuAbdiWDI51p24WSliNxJzYpW5AYv4BPzS9jZQanM+ObbIgeZReGD ZnfQ== X-Gm-Message-State: AOAM532J5m9gei9v7jpHnKdu6yS6/tH9QUaTkgu8tC8b5Ic0GDlyQ6x1 7ZR2BM8c5vu9961Ei86A8vUn6IJosCCfVCCDxkU= X-Google-Smtp-Source: ABdhPJxYxJjv1X81gq8ReaRbBW1NnkwEMPa46l5HvYHHzi/S1jUnT0sViircGsm38aNkPlVYEawnfA== X-Received: by 2002:a17:907:62a5:: with SMTP id nd37mr27649202ejc.148.1626714608131; Mon, 19 Jul 2021 10:10:08 -0700 (PDT) Received: from debil.vdiclient.nvidia.com (84-238-136-197.ip.btc-net.bg. [84.238.136.197]) by smtp.gmail.com with ESMTPSA id nc29sm6073896ejc.10.2021.07.19.10.10.07 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 19 Jul 2021 10:10:07 -0700 (PDT) From: Nikolay Aleksandrov To: netdev@vger.kernel.org Cc: roopa@nvidia.com, bridge@lists.linux-foundation.org, Nikolay Aleksandrov Subject: [PATCH net-next 12/15] net: bridge: vlan: add support for global options Date: Mon, 19 Jul 2021 20:06:34 +0300 Message-Id: <20210719170637.435541-13-razor@blackwall.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20210719170637.435541-1-razor@blackwall.org> References: <20210719170637.435541-1-razor@blackwall.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org X-Patchwork-Delegate: kuba@kernel.org From: Nikolay Aleksandrov We can have two types of vlan options depending on context: - per-device vlan options (split in per-bridge and per-port) - global vlan options The second type wasn't supported in the bridge until now, but we need them for per-vlan multicast support, per-vlan STP support and other options which require global vlan context. They are contained in the global bridge vlan context even if the vlan is not configured on the bridge device itself. This patch adds initial netlink attributes and support for setting these global vlan options, they can only be set (RTM_NEWVLAN) and the operation must use the bridge device. Since there are no such options yet it shouldn't have any functional effect. Signed-off-by: Nikolay Aleksandrov --- include/uapi/linux/if_bridge.h | 13 ++++++ net/bridge/br_private.h | 4 ++ net/bridge/br_vlan.c | 16 +++++-- net/bridge/br_vlan_options.c | 85 ++++++++++++++++++++++++++++++++++ 4 files changed, 115 insertions(+), 3 deletions(-) diff --git a/include/uapi/linux/if_bridge.h b/include/uapi/linux/if_bridge.h index 1f7513300cfe..760b264bd03d 100644 --- a/include/uapi/linux/if_bridge.h +++ b/include/uapi/linux/if_bridge.h @@ -485,10 +485,15 @@ enum { * [BRIDGE_VLANDB_ENTRY_INFO] * ... * } + * [BRIDGE_VLANDB_GLOBAL_OPTIONS] = { + * [BRIDGE_VLANDB_GOPTS_ID] + * ... 
+ * } */ enum { BRIDGE_VLANDB_UNSPEC, BRIDGE_VLANDB_ENTRY, + BRIDGE_VLANDB_GLOBAL_OPTIONS, __BRIDGE_VLANDB_MAX, }; #define BRIDGE_VLANDB_MAX (__BRIDGE_VLANDB_MAX - 1) @@ -538,6 +543,14 @@ enum { }; #define BRIDGE_VLANDB_STATS_MAX (__BRIDGE_VLANDB_STATS_MAX - 1) +enum { + BRIDGE_VLANDB_GOPTS_UNSPEC, + BRIDGE_VLANDB_GOPTS_ID, + BRIDGE_VLANDB_GOPTS_RANGE, + __BRIDGE_VLANDB_GOPTS_MAX +}; +#define BRIDGE_VLANDB_GOPTS_MAX (__BRIDGE_VLANDB_GOPTS_MAX - 1) + /* Bridge multicast database attributes * [MDBA_MDB] = { * [MDBA_MDB_ENTRY] = { diff --git a/net/bridge/br_private.h b/net/bridge/br_private.h index 5515b6c37322..bc920bc1ff44 100644 --- a/net/bridge/br_private.h +++ b/net/bridge/br_private.h @@ -1605,6 +1605,10 @@ int br_vlan_process_options(const struct net_bridge *br, struct net_bridge_vlan *range_end, struct nlattr **tb, struct netlink_ext_ack *extack); +int br_vlan_rtm_process_global_options(struct net_device *dev, + const struct nlattr *attr, + int cmd, + struct netlink_ext_ack *extack); /* vlan state manipulation helpers using *_ONCE to annotate lock-free access */ static inline u8 br_vlan_get_state(const struct net_bridge_vlan *v) diff --git a/net/bridge/br_vlan.c b/net/bridge/br_vlan.c index ab4969a4a380..dcb5acf783d2 100644 --- a/net/bridge/br_vlan.c +++ b/net/bridge/br_vlan.c @@ -2203,12 +2203,22 @@ static int br_vlan_rtm_process(struct sk_buff *skb, struct nlmsghdr *nlh, } nlmsg_for_each_attr(attr, nlh, sizeof(*bvm), rem) { - if (nla_type(attr) != BRIDGE_VLANDB_ENTRY) + switch (nla_type(attr)) { + case BRIDGE_VLANDB_ENTRY: + err = br_vlan_rtm_process_one(dev, attr, + nlh->nlmsg_type, + extack); + break; + case BRIDGE_VLANDB_GLOBAL_OPTIONS: + err = br_vlan_rtm_process_global_options(dev, attr, + nlh->nlmsg_type, + extack); + break; + default: continue; + } vlans++; - err = br_vlan_rtm_process_one(dev, attr, nlh->nlmsg_type, - extack); if (err) break; } diff --git a/net/bridge/br_vlan_options.c b/net/bridge/br_vlan_options.c index b4add9ea8964..a7d5a2334207 100644 --- a/net/bridge/br_vlan_options.c +++ b/net/bridge/br_vlan_options.c @@ -258,3 +258,88 @@ int br_vlan_process_options(const struct net_bridge *br, return err; } + +static int br_vlan_process_global_one_opts(const struct net_bridge *br, + struct net_bridge_vlan_group *vg, + struct net_bridge_vlan *v, + struct nlattr **tb, + bool *changed, + struct netlink_ext_ack *extack) +{ + *changed = false; + return 0; +} + +static const struct nla_policy br_vlan_db_gpol[BRIDGE_VLANDB_GOPTS_MAX + 1] = { + [BRIDGE_VLANDB_GOPTS_ID] = { .type = NLA_U16 }, + [BRIDGE_VLANDB_GOPTS_RANGE] = { .type = NLA_U16 }, +}; + +int br_vlan_rtm_process_global_options(struct net_device *dev, + const struct nlattr *attr, + int cmd, + struct netlink_ext_ack *extack) +{ + struct nlattr *tb[BRIDGE_VLANDB_GOPTS_MAX + 1]; + struct net_bridge_vlan_group *vg; + struct net_bridge_vlan *v; + u16 vid, vid_range = 0; + struct net_bridge *br; + int err = 0; + + if (cmd != RTM_NEWVLAN) { + NL_SET_ERR_MSG_MOD(extack, "Global vlan options support only set operation"); + return -EINVAL; + } + if (!netif_is_bridge_master(dev)) { + NL_SET_ERR_MSG_MOD(extack, "Global vlan options can only be set on bridge device"); + return -EINVAL; + } + br = netdev_priv(dev); + vg = br_vlan_group(br); + if (WARN_ON(!vg)) + return -ENODEV; + + err = nla_parse_nested(tb, BRIDGE_VLANDB_GOPTS_MAX, attr, + br_vlan_db_gpol, extack); + if (err) + return err; + + if (!tb[BRIDGE_VLANDB_GOPTS_ID]) { + NL_SET_ERR_MSG_MOD(extack, "Missing vlan entry id"); + return -EINVAL; + } + vid = 
nla_get_u16(tb[BRIDGE_VLANDB_GOPTS_ID]); + if (!br_vlan_valid_id(vid, extack)) + return -EINVAL; + + if (tb[BRIDGE_VLANDB_GOPTS_RANGE]) { + vid_range = nla_get_u16(tb[BRIDGE_VLANDB_GOPTS_RANGE]); + if (!br_vlan_valid_id(vid_range, extack)) + return -EINVAL; + if (vid >= vid_range) { + NL_SET_ERR_MSG_MOD(extack, "End vlan id is less than or equal to start vlan id"); + return -EINVAL; + } + } else { + vid_range = vid; + } + + for (; vid <= vid_range; vid++) { + bool changed = false; + + v = br_vlan_find(vg, vid); + if (!v) { + NL_SET_ERR_MSG_MOD(extack, "Vlan in range doesn't exist, can't process global options"); + err = -ENOENT; + break; + } + + err = br_vlan_process_global_one_opts(br, vg, v, tb, &changed, + extack); + if (err) + break; + } + + return err; +} From patchwork Mon Jul 19 17:06:35 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nikolay Aleksandrov X-Patchwork-Id: 12386377 X-Patchwork-Delegate: kuba@kernel.org Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.8 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id A5515C64980 for ; Mon, 19 Jul 2021 17:22:37 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 8C34E6108B for ; Mon, 19 Jul 2021 17:22:37 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1355838AbhGSQkB (ORCPT ); Mon, 19 Jul 2021 12:40:01 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:36220 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1347591AbhGSQgh (ORCPT ); Mon, 19 Jul 2021 12:36:37 -0400 Received: from mail-ej1-x62c.google.com (mail-ej1-x62c.google.com [IPv6:2a00:1450:4864:20::62c]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 3A4E6C078813 for ; Mon, 19 Jul 2021 09:49:57 -0700 (PDT) Received: by mail-ej1-x62c.google.com with SMTP id oz7so26090002ejc.2 for ; Mon, 19 Jul 2021 10:10:10 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=blackwall-org.20150623.gappssmtp.com; s=20150623; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=0cf2LlS6jWfQkusBdzzkz/uvVmHVHUO2QoyYAj4uPiI=; b=Gkm9g8hVYMOhtrs9MTG+1zFy8TL/tamGGnCnIrlm+IOtoTAHVbv/K3knrcbhKNtLh1 DYOnkhmtPquPsFpuVbsVwOL79M2+a84h9oTZHx1EpcF9VHvWdGIudiRX9TStEKB57tP0 54OddR1Zf+EuQ8khnjxVeua9jStoybFiAOr/RaFx0KF5qjyW1xzR4BjUTnRCihyhB+ct XGQ0LCgYpOowd3ybZzlAyXFROfrQJE+ra+/fGseWvNfBwG9k2DJeACff9SvKldqR2nI4 y8VFTpUoI3nXTWu7F5Ellv5vh3lnO2jndI4tSXFpN8MgRH7ajwPrKWEsJbNguoihHsNO juwQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=0cf2LlS6jWfQkusBdzzkz/uvVmHVHUO2QoyYAj4uPiI=; b=RBeY2cwwuAfL93ff0TTpHGkNFi58/OXFj5OaFUaaeR0vfeaYbpZph+MfJSE/dlzbJl 47XSW73I7h0231vvzu5xkbZ29l0On5DRx3RFe98Pn6fOU5YLw+gOGXzTpdXr4PdmOWKG 6gvvzDuUbAde1TLfGUQPzTXFBkJ1p/E8lpJaAqemIKExElxwDkP4MCI88feGTR6LC2ef w97ZrpW2272gwHyQP+qJS8ICEVeXYlBx12jaUxH0VwqLJyWixh2Zk+jkA4akW2UosXEs 
T5oXfLg/CtTkHOCvHBvs82Wy/0Q7TNaw18Gjya4QTAUcjRzqInk8Ve0Y7Uwe5oc9HasY Zq5Q== X-Gm-Message-State: AOAM532ospngJpGpYMFqvdwF7gew+8gB7aYTIlFrrMBjRSUpsg23d8WN UvOXVojg0CFjYZ5KI6tojo3D0pkkM8wKz/zPriE= X-Google-Smtp-Source: ABdhPJwtOFuV3FqU4fr00EqQNTXvoFiTqke8X4xPkej103amW1xUOkZ9vR0U74QcOE0oI6EH95REwQ== X-Received: by 2002:a17:906:2a04:: with SMTP id j4mr28744231eje.344.1626714609073; Mon, 19 Jul 2021 10:10:09 -0700 (PDT) Received: from debil.vdiclient.nvidia.com (84-238-136-197.ip.btc-net.bg. [84.238.136.197]) by smtp.gmail.com with ESMTPSA id nc29sm6073896ejc.10.2021.07.19.10.10.08 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 19 Jul 2021 10:10:08 -0700 (PDT) From: Nikolay Aleksandrov To: netdev@vger.kernel.org Cc: roopa@nvidia.com, bridge@lists.linux-foundation.org, Nikolay Aleksandrov Subject: [PATCH net-next 13/15] net: bridge: vlan: add support for dumping global vlan options Date: Mon, 19 Jul 2021 20:06:35 +0300 Message-Id: <20210719170637.435541-14-razor@blackwall.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20210719170637.435541-1-razor@blackwall.org> References: <20210719170637.435541-1-razor@blackwall.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org X-Patchwork-Delegate: kuba@kernel.org From: Nikolay Aleksandrov Add a new vlan options dump flag which causes only global vlan options to be dumped. The dumps are done only with bridge devices, ports are ignored. They support vlan compression if the options in sequential vlans are equal (currently always true). Signed-off-by: Nikolay Aleksandrov --- include/uapi/linux/if_bridge.h | 1 + net/bridge/br_private.h | 4 ++++ net/bridge/br_vlan.c | 41 +++++++++++++++++++++++++++------- net/bridge/br_vlan_options.c | 31 +++++++++++++++++++++++++ 4 files changed, 69 insertions(+), 8 deletions(-) diff --git a/include/uapi/linux/if_bridge.h b/include/uapi/linux/if_bridge.h index 760b264bd03d..2203eb749d31 100644 --- a/include/uapi/linux/if_bridge.h +++ b/include/uapi/linux/if_bridge.h @@ -479,6 +479,7 @@ enum { /* flags used in BRIDGE_VLANDB_DUMP_FLAGS attribute to affect dumps */ #define BRIDGE_VLANDB_DUMPF_STATS (1 << 0) /* Include stats in the dump */ +#define BRIDGE_VLANDB_DUMPF_GLOBAL (1 << 1) /* Dump global vlan options only */ /* Bridge vlan RTM attributes * [BRIDGE_VLANDB_ENTRY] = { diff --git a/net/bridge/br_private.h b/net/bridge/br_private.h index bc920bc1ff44..e0a982275a93 100644 --- a/net/bridge/br_private.h +++ b/net/bridge/br_private.h @@ -1609,6 +1609,10 @@ int br_vlan_rtm_process_global_options(struct net_device *dev, const struct nlattr *attr, int cmd, struct netlink_ext_ack *extack); +bool br_vlan_global_opts_can_enter_range(const struct net_bridge_vlan *v_curr, + const struct net_bridge_vlan *r_end); +bool br_vlan_global_opts_fill(struct sk_buff *skb, u16 vid, u16 vid_range, + const struct net_bridge_vlan *v_opts); /* vlan state manipulation helpers using *_ONCE to annotate lock-free access */ static inline u8 br_vlan_get_state(const struct net_bridge_vlan *v) diff --git a/net/bridge/br_vlan.c b/net/bridge/br_vlan.c index dcb5acf783d2..e66b004df763 100644 --- a/net/bridge/br_vlan.c +++ b/net/bridge/br_vlan.c @@ -1919,6 +1919,7 @@ static int br_vlan_dump_dev(const struct net_device *dev, u32 dump_flags) { struct net_bridge_vlan *v, *range_start = NULL, *range_end = NULL; + bool dump_global = !!(dump_flags & BRIDGE_VLANDB_DUMPF_GLOBAL); bool dump_stats = !!(dump_flags & BRIDGE_VLANDB_DUMPF_STATS); struct net_bridge_vlan_group *vg; int idx = 0, s_idx = cb->args[1]; @@ 
-1937,6 +1938,10 @@ static int br_vlan_dump_dev(const struct net_device *dev, vg = br_vlan_group_rcu(br); p = NULL; } else { + /* global options are dumped only for bridge devices */ + if (dump_global) + return 0; + p = br_port_get_rcu(dev); if (WARN_ON(!p)) return -EINVAL; @@ -1959,7 +1964,7 @@ static int br_vlan_dump_dev(const struct net_device *dev, /* idx must stay at range's beginning until it is filled in */ list_for_each_entry_rcu(v, &vg->vlan_list, vlist) { - if (!br_vlan_should_use(v)) + if (!dump_global && !br_vlan_should_use(v)) continue; if (idx < s_idx) { idx++; @@ -1972,8 +1977,21 @@ static int br_vlan_dump_dev(const struct net_device *dev, continue; } - if (dump_stats || v->vid == pvid || - !br_vlan_can_enter_range(v, range_end)) { + if (dump_global) { + if (br_vlan_global_opts_can_enter_range(v, range_end)) + continue; + if (!br_vlan_global_opts_fill(skb, range_start->vid, + range_end->vid, + range_start)) { + err = -EMSGSIZE; + break; + } + /* advance number of filled vlans */ + idx += range_end->vid - range_start->vid + 1; + + range_start = v; + } else if (dump_stats || v->vid == pvid || + !br_vlan_can_enter_range(v, range_end)) { u16 vlan_flags = br_vlan_flags(range_start, pvid); if (!br_vlan_fill_vids(skb, range_start->vid, @@ -1995,11 +2013,18 @@ static int br_vlan_dump_dev(const struct net_device *dev, * - last vlan (range_start == range_end, not in range) * - last vlan range (range_start != range_end, in range) */ - if (!err && range_start && - !br_vlan_fill_vids(skb, range_start->vid, range_end->vid, - range_start, br_vlan_flags(range_start, pvid), - dump_stats)) - err = -EMSGSIZE; + if (!err && range_start) { + if (dump_global && + !br_vlan_global_opts_fill(skb, range_start->vid, + range_end->vid, range_start)) + err = -EMSGSIZE; + else if (!dump_global && + !br_vlan_fill_vids(skb, range_start->vid, + range_end->vid, range_start, + br_vlan_flags(range_start, pvid), + dump_stats)) + err = -EMSGSIZE; + } cb->args[1] = err ? 
idx : 0; diff --git a/net/bridge/br_vlan_options.c b/net/bridge/br_vlan_options.c index a7d5a2334207..f290f5140547 100644 --- a/net/bridge/br_vlan_options.c +++ b/net/bridge/br_vlan_options.c @@ -259,6 +259,37 @@ int br_vlan_process_options(const struct net_bridge *br, return err; } +bool br_vlan_global_opts_can_enter_range(const struct net_bridge_vlan *v_curr, + const struct net_bridge_vlan *r_end) +{ + return v_curr->vid - r_end->vid == 1; +} + +bool br_vlan_global_opts_fill(struct sk_buff *skb, u16 vid, u16 vid_range, + const struct net_bridge_vlan *v_opts) +{ + struct nlattr *nest; + + nest = nla_nest_start(skb, BRIDGE_VLANDB_GLOBAL_OPTIONS); + if (!nest) + return false; + + if (nla_put_u16(skb, BRIDGE_VLANDB_GOPTS_ID, vid)) + goto out_err; + + if (vid_range && vid < vid_range && + nla_put_u16(skb, BRIDGE_VLANDB_GOPTS_RANGE, vid_range)) + goto out_err; + + nla_nest_end(skb, nest); + + return true; + +out_err: + nla_nest_cancel(skb, nest); + return false; +} + static int br_vlan_process_global_one_opts(const struct net_bridge *br, struct net_bridge_vlan_group *vg, struct net_bridge_vlan *v, From patchwork Mon Jul 19 17:06:36 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nikolay Aleksandrov X-Patchwork-Id: 12386399 X-Patchwork-Delegate: kuba@kernel.org Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.8 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 47B36C65BB0 for ; Mon, 19 Jul 2021 17:22:52 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 2E77E61166 for ; Mon, 19 Jul 2021 17:22:52 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1352411AbhGSQlx (ORCPT ); Mon, 19 Jul 2021 12:41:53 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:36062 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1343974AbhGSQgk (ORCPT ); Mon, 19 Jul 2021 12:36:40 -0400 Received: from mail-ej1-x62b.google.com (mail-ej1-x62b.google.com [IPv6:2a00:1450:4864:20::62b]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 3AA6AC078814 for ; Mon, 19 Jul 2021 09:49:58 -0700 (PDT) Received: by mail-ej1-x62b.google.com with SMTP id ga14so29924850ejc.6 for ; Mon, 19 Jul 2021 10:10:11 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=blackwall-org.20150623.gappssmtp.com; s=20150623; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=yYcyZCiZBQoPwEo8PkgKkU8xnppoooSri5YHE+fHaaM=; b=AupFKrEpi6DreNMsGKaMtPutFkruCxWEvwt+3S1Zxpl0IvbvzbLc1bU+LfkLQeSoY6 zP/Z+sTsONL8VQWUupHdRSsgXrRTot0vDuiND8kkTLV9EOyAc61xSTPyPeWtntRbNHfr ec78lILw6avc1KGvLbf/HFQvFVPyMRIy7xeNU19FQZpLR+wt/0y/xSjm2PCP4+VTdQLz roh5BGuCLUws1dtISXl6CbTO8jQvLg6cYBS6J0uayprHps2SpLxkv3KcEdUULkgSxh0i 5SddzeFNXnESid5y5N141oQ7bssH6S6d9uNU0E7ivgaGpBJobVHVDQYqIUNkVbrKnkN2 QzGw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; 
bh=yYcyZCiZBQoPwEo8PkgKkU8xnppoooSri5YHE+fHaaM=; b=NR0KvMF1e+YUUyEzZ+JU90kRePtUm0uOABehYiFlZTHdNcDR6SvUzJvW7SOXN+kh5r 3CXoHkrWR0eYxTQek/o6Yp3Yvnt2mH6ix2zbirNiLx9SV2AguMEVGYwSheZ2CzWME8TE BwPMBXgTrQaS7Fj90t8QiW9VuQsE2ibfKPgjKEIExJL32VIMVNyy6Ey4piy/+bDkA/dC UQn3Dm5qSyQw1oUii5AVttoiuul/H4ga97JsKhKA9RJEowKelEPIYvoE7WJWsaueHD6a Ywb1OeJvTnThNJo1N3gaupp7iiV28lGBVjD1cyNLr+xC0B7nLVj0YxGfCVco99NR3Pfd t9SQ== X-Gm-Message-State: AOAM532NBWp+OOVMngIOD5XUv7LbLm9rONiUm9uU+hOPhC69IDIYjzMD rK1igL9icG8h8LKttqYoNGW5M5o1POU674+pqJg= X-Google-Smtp-Source: ABdhPJz5mxJOjDRwj89onF7oeVaPY+TyRNftAU4jfZWFT5dAcoqtAqJV5ng6GQuHM0+KXcVjA6heVQ== X-Received: by 2002:a17:906:6b1b:: with SMTP id q27mr27854438ejr.169.1626714610011; Mon, 19 Jul 2021 10:10:10 -0700 (PDT) Received: from debil.vdiclient.nvidia.com (84-238-136-197.ip.btc-net.bg. [84.238.136.197]) by smtp.gmail.com with ESMTPSA id nc29sm6073896ejc.10.2021.07.19.10.10.09 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 19 Jul 2021 10:10:09 -0700 (PDT) From: Nikolay Aleksandrov To: netdev@vger.kernel.org Cc: roopa@nvidia.com, bridge@lists.linux-foundation.org, Nikolay Aleksandrov Subject: [PATCH net-next 14/15] net: bridge: vlan: notify when global options change Date: Mon, 19 Jul 2021 20:06:36 +0300 Message-Id: <20210719170637.435541-15-razor@blackwall.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20210719170637.435541-1-razor@blackwall.org> References: <20210719170637.435541-1-razor@blackwall.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org X-Patchwork-Delegate: kuba@kernel.org From: Nikolay Aleksandrov Add support for global options notifications. They use only RTM_NEWVLAN since global options can only be set and are contained in a separate vlan global options attribute. Notifications are compressed in ranges where possible, i.e. the sequential vlan options are equal. 
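For illustration only (not part of this patch): a minimal userspace sketch that listens for these global-option notifications could look roughly like the one below. It assumes a kernel whose uapi headers expose RTNLGRP_BRVLAN and RTM_NEWVLAN, and it does no attribute parsing of the BRIDGE_VLANDB_GLOBAL_OPTIONS payload.

/*
 * Illustrative sketch, not part of the patch: subscribe to the bridge
 * vlan notification group and report RTM_NEWVLAN messages as they are
 * emitted by br_vlan_global_opts_notify().
 */
#include <stdio.h>
#include <unistd.h>
#include <sys/socket.h>
#include <linux/netlink.h>
#include <linux/rtnetlink.h>

#ifndef SOL_NETLINK
#define SOL_NETLINK 270	/* fallback in case libc headers lack it */
#endif

int main(void)
{
	struct sockaddr_nl sa = { .nl_family = AF_NETLINK };
	char buf[8192] __attribute__((aligned(NLMSG_ALIGNTO)));
	unsigned int grp = RTNLGRP_BRVLAN;
	int fd;

	fd = socket(AF_NETLINK, SOCK_RAW, NETLINK_ROUTE);
	if (fd < 0 || bind(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0)
		return 1;

	/* RTNLGRP_BRVLAN is above group 32, so it cannot be joined via the
	 * legacy nl_groups bitmask; NETLINK_ADD_MEMBERSHIP is required.
	 */
	if (setsockopt(fd, SOL_NETLINK, NETLINK_ADD_MEMBERSHIP,
		       &grp, sizeof(grp)) < 0)
		return 1;

	for (;;) {
		ssize_t len = recv(fd, buf, sizeof(buf), 0);
		struct nlmsghdr *nlh;

		if (len <= 0)
			break;
		for (nlh = (struct nlmsghdr *)buf; NLMSG_OK(nlh, len);
		     nlh = NLMSG_NEXT(nlh, len))
			if (nlh->nlmsg_type == RTM_NEWVLAN)
				printf("bridge vlan notification (%u bytes)\n",
				       nlh->nlmsg_len);
	}
	close(fd);
	return 0;
}

Because the kernel compresses ranges, a single such notification may describe several consecutive vlans via BRIDGE_VLANDB_GOPTS_ID and BRIDGE_VLANDB_GOPTS_RANGE.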
Signed-off-by: Nikolay Aleksandrov --- net/bridge/br_vlan_options.c | 80 +++++++++++++++++++++++++++++++++++- 1 file changed, 79 insertions(+), 1 deletion(-) diff --git a/net/bridge/br_vlan_options.c b/net/bridge/br_vlan_options.c index f290f5140547..827bfc319599 100644 --- a/net/bridge/br_vlan_options.c +++ b/net/bridge/br_vlan_options.c @@ -290,6 +290,57 @@ bool br_vlan_global_opts_fill(struct sk_buff *skb, u16 vid, u16 vid_range, return false; } +static size_t rtnl_vlan_global_opts_nlmsg_size(void) +{ + return NLMSG_ALIGN(sizeof(struct br_vlan_msg)) + + nla_total_size(0) /* BRIDGE_VLANDB_GLOBAL_OPTIONS */ + + nla_total_size(sizeof(u16)) /* BRIDGE_VLANDB_GOPTS_ID */ + + nla_total_size(sizeof(u16)); /* BRIDGE_VLANDB_GOPTS_RANGE */ +} + +static void br_vlan_global_opts_notify(const struct net_bridge *br, + u16 vid, u16 vid_range) +{ + struct net_bridge_vlan *v; + struct br_vlan_msg *bvm; + struct nlmsghdr *nlh; + struct sk_buff *skb; + int err = -ENOBUFS; + + /* right now notifications are done only with rtnl held */ + ASSERT_RTNL(); + + skb = nlmsg_new(rtnl_vlan_global_opts_nlmsg_size(), GFP_KERNEL); + if (!skb) + goto out_err; + + err = -EMSGSIZE; + nlh = nlmsg_put(skb, 0, 0, RTM_NEWVLAN, sizeof(*bvm), 0); + if (!nlh) + goto out_err; + bvm = nlmsg_data(nlh); + memset(bvm, 0, sizeof(*bvm)); + bvm->family = AF_BRIDGE; + bvm->ifindex = br->dev->ifindex; + + /* need to find the vlan due to flags/options */ + v = br_vlan_find(br_vlan_group(br), vid); + if (!v) + goto out_kfree; + + if (!br_vlan_global_opts_fill(skb, vid, vid_range, v)) + goto out_err; + + nlmsg_end(skb, nlh); + rtnl_notify(skb, dev_net(br->dev), 0, RTNLGRP_BRVLAN, NULL, GFP_KERNEL); + return; + +out_err: + rtnl_set_sk_err(dev_net(br->dev), RTNLGRP_BRVLAN, err); +out_kfree: + kfree_skb(skb); +} + static int br_vlan_process_global_one_opts(const struct net_bridge *br, struct net_bridge_vlan_group *vg, struct net_bridge_vlan *v, @@ -311,9 +362,9 @@ int br_vlan_rtm_process_global_options(struct net_device *dev, int cmd, struct netlink_ext_ack *extack) { + struct net_bridge_vlan *v, *curr_start = NULL, *curr_end = NULL; struct nlattr *tb[BRIDGE_VLANDB_GOPTS_MAX + 1]; struct net_bridge_vlan_group *vg; - struct net_bridge_vlan *v; u16 vid, vid_range = 0; struct net_bridge *br; int err = 0; @@ -370,7 +421,34 @@ int br_vlan_rtm_process_global_options(struct net_device *dev, extack); if (err) break; + + if (changed) { + /* vlan options changed, check for range */ + if (!curr_start) { + curr_start = v; + curr_end = v; + continue; + } + + if (!br_vlan_global_opts_can_enter_range(v, curr_end)) { + br_vlan_global_opts_notify(br, curr_start->vid, + curr_end->vid); + curr_start = v; + } + curr_end = v; + } else { + /* nothing changed and nothing to notify yet */ + if (!curr_start) + continue; + + br_vlan_global_opts_notify(br, curr_start->vid, + curr_end->vid); + curr_start = NULL; + curr_end = NULL; + } } + if (curr_start) + br_vlan_global_opts_notify(br, curr_start->vid, curr_end->vid); return err; } From patchwork Mon Jul 19 17:06:37 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nikolay Aleksandrov X-Patchwork-Id: 12386395 X-Patchwork-Delegate: kuba@kernel.org Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.8 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, 
[84.238.136.197]) by smtp.gmail.com with ESMTPSA id nc29sm6073896ejc.10.2021.07.19.10.10.10 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 19 Jul 2021 10:10:10 -0700 (PDT) From: Nikolay Aleksandrov To: netdev@vger.kernel.org Cc: roopa@nvidia.com, bridge@lists.linux-foundation.org, Nikolay Aleksandrov Subject: [PATCH net-next 15/15] net: bridge: vlan: add mcast snooping control Date: Mon, 19 Jul 2021 20:06:37 +0300 Message-Id: <20210719170637.435541-16-razor@blackwall.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20210719170637.435541-1-razor@blackwall.org> References: <20210719170637.435541-1-razor@blackwall.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org X-Patchwork-Delegate: kuba@kernel.org From: Nikolay Aleksandrov Add a new global vlan option which controls whether multicast snooping is enabled or disabled for a single vlan. It controls the vlan private flag: BR_VLFLAG_GLOBAL_MCAST_ENABLED. Signed-off-by: Nikolay Aleksandrov --- include/uapi/linux/if_bridge.h | 1 + net/bridge/br_multicast.c | 16 ++++++++++++++++ net/bridge/br_private.h | 7 +++++++ net/bridge/br_vlan_options.c | 24 +++++++++++++++++++++++- 4 files changed, 47 insertions(+), 1 deletion(-) diff --git a/include/uapi/linux/if_bridge.h b/include/uapi/linux/if_bridge.h index 2203eb749d31..f7997a3f7f82 100644 --- a/include/uapi/linux/if_bridge.h +++ b/include/uapi/linux/if_bridge.h @@ -548,6 +548,7 @@ enum { BRIDGE_VLANDB_GOPTS_UNSPEC, BRIDGE_VLANDB_GOPTS_ID, BRIDGE_VLANDB_GOPTS_RANGE, + BRIDGE_VLANDB_GOPTS_MCAST_SNOOPING, __BRIDGE_VLANDB_GOPTS_MAX }; #define BRIDGE_VLANDB_GOPTS_MAX (__BRIDGE_VLANDB_GOPTS_MAX - 1) diff --git a/net/bridge/br_multicast.c b/net/bridge/br_multicast.c index fb5e5df571fd..976491951c82 100644 --- a/net/bridge/br_multicast.c +++ b/net/bridge/br_multicast.c @@ -3988,6 +3988,22 @@ int br_multicast_toggle_vlan_snooping(struct net_bridge *br, bool on, return 0; } +bool br_multicast_toggle_global_vlan(struct net_bridge_vlan *vlan, bool on) +{ + ASSERT_RTNL(); + + /* BR_VLFLAG_GLOBAL_MCAST_ENABLED relies on eventual consistency and + * requires only RTNL to change + */ + if (on == !!(vlan->priv_flags & BR_VLFLAG_GLOBAL_MCAST_ENABLED)) + return false; + + vlan->priv_flags ^= BR_VLFLAG_GLOBAL_MCAST_ENABLED; + br_multicast_toggle_vlan(vlan, on); + + return true; +} + void br_multicast_stop(struct net_bridge *br) { ASSERT_RTNL(); diff --git a/net/bridge/br_private.h b/net/bridge/br_private.h index e0a982275a93..af1f5c1c6b88 100644 --- a/net/bridge/br_private.h +++ b/net/bridge/br_private.h @@ -922,6 +922,7 @@ void br_multicast_toggle_one_vlan(struct net_bridge_vlan *vlan, bool on); void br_multicast_toggle_vlan(struct net_bridge_vlan *vlan, bool on); int br_multicast_toggle_vlan_snooping(struct net_bridge *br, bool on, struct netlink_ext_ack *extack); +bool br_multicast_toggle_global_vlan(struct net_bridge_vlan *vlan, bool on); static inline bool br_group_is_l2(const struct br_ip *group) { @@ -1295,6 +1296,12 @@ static inline int br_multicast_toggle_vlan_snooping(struct net_bridge *br, { return -EOPNOTSUPP; } + +static inline bool br_multicast_toggle_global_vlan(struct net_bridge_vlan *vlan, + bool on) +{ + return false; +} #endif /* br_vlan.c */ diff --git a/net/bridge/br_vlan_options.c b/net/bridge/br_vlan_options.c index 827bfc319599..4ef975b20185 100644 --- a/net/bridge/br_vlan_options.c +++ b/net/bridge/br_vlan_options.c @@ -262,7 +262,9 @@ int br_vlan_process_options(const struct net_bridge *br, bool br_vlan_global_opts_can_enter_range(const 
struct net_bridge_vlan *v_curr, const struct net_bridge_vlan *r_end) { - return v_curr->vid - r_end->vid == 1; + return v_curr->vid - r_end->vid == 1 && + ((v_curr->priv_flags ^ r_end->priv_flags) & + BR_VLFLAG_GLOBAL_MCAST_ENABLED) == 0; } bool br_vlan_global_opts_fill(struct sk_buff *skb, u16 vid, u16 vid_range, @@ -281,6 +283,12 @@ bool br_vlan_global_opts_fill(struct sk_buff *skb, u16 vid, u16 vid_range, nla_put_u16(skb, BRIDGE_VLANDB_GOPTS_RANGE, vid_range)) goto out_err; +#ifdef CONFIG_BRIDGE_IGMP_SNOOPING + if (nla_put_u8(skb, BRIDGE_VLANDB_GOPTS_MCAST_SNOOPING, + !!(v_opts->priv_flags & BR_VLFLAG_GLOBAL_MCAST_ENABLED))) + goto out_err; +#endif + nla_nest_end(skb, nest); return true; @@ -295,6 +303,9 @@ static size_t rtnl_vlan_global_opts_nlmsg_size(void) return NLMSG_ALIGN(sizeof(struct br_vlan_msg)) + nla_total_size(0) /* BRIDGE_VLANDB_GLOBAL_OPTIONS */ + nla_total_size(sizeof(u16)) /* BRIDGE_VLANDB_GOPTS_ID */ +#ifdef CONFIG_BRIDGE_IGMP_SNOOPING + + nla_total_size(sizeof(u8)) /* BRIDGE_VLANDB_GOPTS_MCAST_SNOOPING */ +#endif + nla_total_size(sizeof(u16)); /* BRIDGE_VLANDB_GOPTS_RANGE */ } @@ -349,12 +360,23 @@ static int br_vlan_process_global_one_opts(const struct net_bridge *br, struct netlink_ext_ack *extack) { *changed = false; +#ifdef CONFIG_BRIDGE_IGMP_SNOOPING + if (tb[BRIDGE_VLANDB_GOPTS_MCAST_SNOOPING]) { + u8 mc_snooping; + + mc_snooping = nla_get_u8(tb[BRIDGE_VLANDB_GOPTS_MCAST_SNOOPING]); + if (br_multicast_toggle_global_vlan(v, !!mc_snooping)) + *changed = true; + } +#endif + return 0; } static const struct nla_policy br_vlan_db_gpol[BRIDGE_VLANDB_GOPTS_MAX + 1] = { [BRIDGE_VLANDB_GOPTS_ID] = { .type = NLA_U16 }, [BRIDGE_VLANDB_GOPTS_RANGE] = { .type = NLA_U16 }, + [BRIDGE_VLANDB_GOPTS_MCAST_SNOOPING] = { .type = NLA_U8 }, }; int br_vlan_rtm_process_global_options(struct net_device *dev,