From patchwork Wed Jan 31 12:35:43 2024
X-Patchwork-Submitter: Tobias Waldekranz
X-Patchwork-Id: 13539401
X-Patchwork-Delegate: kuba@kernel.org
From: Tobias Waldekranz
To: davem@davemloft.net, kuba@kernel.org
Cc: olteanv@gmail.com, roopa@nvidia.com, razor@blackwall.org,
    bridge@lists.linux.dev, netdev@vger.kernel.org, jiri@resnulli.us,
    ivecera@redhat.com
Subject: [PATCH net 1/2] net: switchdev: Add helper to check if an object event is pending
Date: Wed, 31 Jan 2024 13:35:43 +0100
Message-Id: <20240131123544.462597-2-tobias@waldekranz.com>
In-Reply-To: <20240131123544.462597-1-tobias@waldekranz.com>
References: <20240131123544.462597-1-tobias@waldekranz.com>
Organization: Addiva Elektronik

When adding a port to, or removing one from, a bridge, the port must be
brought up to speed with the current state of the bridge. This is done
by replaying all relevant events, directly to the port in question.

In some situations, specifically when replaying the MDB, this process
may race against new events that are generated concurrently. So the
bridge must be able to check whether an event is already pending on the
deferred queue. switchdev_port_obj_is_deferred() answers this question.
Signed-off-by: Tobias Waldekranz
---
 include/net/switchdev.h   |  3 ++
 net/switchdev/switchdev.c | 61 +++++++++++++++++++++++++++++++++++++++
 2 files changed, 64 insertions(+)

diff --git a/include/net/switchdev.h b/include/net/switchdev.h
index a43062d4c734..538851a93d9e 100644
--- a/include/net/switchdev.h
+++ b/include/net/switchdev.h
@@ -308,6 +308,9 @@ void switchdev_deferred_process(void);
 int switchdev_port_attr_set(struct net_device *dev,
 			    const struct switchdev_attr *attr,
 			    struct netlink_ext_ack *extack);
+bool switchdev_port_obj_is_deferred(struct net_device *dev,
+				    enum switchdev_notifier_type nt,
+				    const struct switchdev_obj *obj);
 int switchdev_port_obj_add(struct net_device *dev,
 			   const struct switchdev_obj *obj,
 			   struct netlink_ext_ack *extack);
diff --git a/net/switchdev/switchdev.c b/net/switchdev/switchdev.c
index 5b045284849e..40bb17c7fdbf 100644
--- a/net/switchdev/switchdev.c
+++ b/net/switchdev/switchdev.c
@@ -19,6 +19,35 @@
 #include <linux/rtnetlink.h>
 #include <net/switchdev.h>
 
+static bool switchdev_obj_eq(const struct switchdev_obj *a,
+			     const struct switchdev_obj *b)
+{
+	const struct switchdev_obj_port_vlan *va, *vb;
+	const struct switchdev_obj_port_mdb *ma, *mb;
+
+	if (a->id != b->id || a->orig_dev != b->orig_dev)
+		return false;
+
+	switch (a->id) {
+	case SWITCHDEV_OBJ_ID_PORT_VLAN:
+		va = SWITCHDEV_OBJ_PORT_VLAN(a);
+		vb = SWITCHDEV_OBJ_PORT_VLAN(b);
+		return va->flags == vb->flags &&
+		       va->vid == vb->vid &&
+		       va->changed == vb->changed;
+	case SWITCHDEV_OBJ_ID_PORT_MDB:
+	case SWITCHDEV_OBJ_ID_HOST_MDB:
+		ma = SWITCHDEV_OBJ_PORT_MDB(a);
+		mb = SWITCHDEV_OBJ_PORT_MDB(b);
+		return ma->vid == mb->vid &&
+		       !memcmp(ma->addr, mb->addr, sizeof(ma->addr));
+	default:
+		break;
+	}
+
+	BUG();
+}
+
 static LIST_HEAD(deferred);
 static DEFINE_SPINLOCK(deferred_lock);
 
@@ -307,6 +336,38 @@ int switchdev_port_obj_del(struct net_device *dev,
 }
 EXPORT_SYMBOL_GPL(switchdev_port_obj_del);
 
+bool switchdev_port_obj_is_deferred(struct net_device *dev,
+				    enum switchdev_notifier_type nt,
+				    const struct switchdev_obj *obj)
+{
+	struct switchdev_deferred_item *dfitem;
+	bool found = false;
+
+	ASSERT_RTNL();
+
+	spin_lock_bh(&deferred_lock);
+
+	list_for_each_entry(dfitem, &deferred, list) {
+		if (dfitem->dev != dev)
+			continue;
+
+		if ((dfitem->func == switchdev_port_obj_add_deferred &&
+		     nt == SWITCHDEV_PORT_OBJ_ADD) ||
+		    (dfitem->func == switchdev_port_obj_del_deferred &&
+		     nt == SWITCHDEV_PORT_OBJ_DEL)) {
+			if (switchdev_obj_eq((const void *)dfitem->data, obj)) {
+				found = true;
+				break;
+			}
+		}
+	}
+
+	spin_unlock_bh(&deferred_lock);
+
+	return found;
+}
+EXPORT_SYMBOL_GPL(switchdev_port_obj_is_deferred);
+
 static ATOMIC_NOTIFIER_HEAD(switchdev_notif_chain);
 static BLOCKING_NOTIFIER_HEAD(switchdev_blocking_notif_chain);

From patchwork Wed Jan 31 12:35:44 2024
X-Patchwork-Submitter: Tobias Waldekranz
X-Patchwork-Id: 13539402
X-Patchwork-Delegate: kuba@kernel.org
From: Tobias Waldekranz
To: davem@davemloft.net, kuba@kernel.org
Cc: olteanv@gmail.com, roopa@nvidia.com, razor@blackwall.org,
    bridge@lists.linux.dev, netdev@vger.kernel.org, jiri@resnulli.us,
    ivecera@redhat.com
Subject: [PATCH net 2/2] net: bridge: switchdev: Skip MDB replays of pending events
Date: Wed, 31 Jan 2024 13:35:44 +0100
Message-Id: <20240131123544.462597-3-tobias@waldekranz.com>
In-Reply-To: <20240131123544.462597-1-tobias@waldekranz.com>
References: <20240131123544.462597-1-tobias@waldekranz.com>
Organization: Addiva Elektronik

Generating the list of MDB events to replay races against the IGMP/MLD
snooping logic, which may concurrently enqueue events to the switchdev
deferred queue, leading to duplicate events being sent to drivers.

Avoid this by grabbing the write-side lock of the MDB, and by making
sure that a deferred version of a replay event is not already enqueued
on the switchdev deferred queue before adding it to the replay list.

An easy way to reproduce this issue, on an mv88e6xxx system, was to
create a snooping bridge and immediately add a port to it:

root@infix-06-0b-00:~$ ip link add dev br0 up type bridge mcast_snooping 1 && \
> ip link set dev x3 up master br0
root@infix-06-0b-00:~$ ip link del dev br0
root@infix-06-0b-00:~$ mvls atu
ADDRESS            FID STATE      Q  F  0  1  2  3  4  5  6  7  8  9  a
DEV:0  Marvell 88E6393X
33:33:00:00:00:6a    1 static     -  -  0  .  .  .  .  .  .  .  .  .  .
33:33:ff:87:e4:3f    1 static     -  -  0  .  .  .  .  .  .  .  .  .  .
ff:ff:ff:ff:ff:ff    1 static     -  -  0  1  2  3  4  5  6  7  8  9  a
root@infix-06-0b-00:~$

The two IPv6 groups remain in the hardware database because the port
(x3) is notified of the host's membership twice: once in the original
event and once in a replay. Since DSA tracks host addresses using
reference counters, and only a single delete notification is sent, the
count remains at 1 when the bridge is destroyed.

Signed-off-by: Tobias Waldekranz
---
 net/bridge/br_switchdev.c | 44 ++++++++++++++++++++++++---------------
 1 file changed, 27 insertions(+), 17 deletions(-)

diff --git a/net/bridge/br_switchdev.c b/net/bridge/br_switchdev.c
index ee84e783e1df..a3481190d5e6 100644
--- a/net/bridge/br_switchdev.c
+++ b/net/bridge/br_switchdev.c
@@ -595,6 +595,8 @@ br_switchdev_mdb_replay_one(struct notifier_block *nb, struct net_device *dev,
 }
 
 static int br_switchdev_mdb_queue_one(struct list_head *mdb_list,
+				      struct net_device *dev,
+				      unsigned long action,
 				      enum switchdev_obj_id id,
 				      const struct net_bridge_mdb_entry *mp,
 				      struct net_device *orig_dev)
@@ -608,8 +610,17 @@ static int br_switchdev_mdb_queue_one(struct list_head *mdb_list,
 	mdb->obj.id = id;
 	mdb->obj.orig_dev = orig_dev;
 	br_switchdev_mdb_populate(mdb, mp);
-	list_add_tail(&mdb->obj.list, mdb_list);
 
+	if (switchdev_port_obj_is_deferred(dev, action, &mdb->obj)) {
+		/* This event is already in the deferred queue of
+		 * events, so this replay must be elided, lest the
+		 * driver receives duplicate events for it.
+		 */
+		kfree(mdb);
+		return 0;
+	}
+
+	list_add_tail(&mdb->obj.list, mdb_list);
 	return 0;
 }
 
@@ -677,22 +688,26 @@ br_switchdev_mdb_replay(struct net_device *br_dev, struct net_device *dev,
 	if (!br_opt_get(br, BROPT_MULTICAST_ENABLED))
 		return 0;
 
-	/* We cannot walk over br->mdb_list protected just by the rtnl_mutex,
-	 * because the write-side protection is br->multicast_lock. But we
-	 * need to emulate the [ blocking ] calling context of a regular
-	 * switchdev event, so since both br->multicast_lock and RCU read side
-	 * critical sections are atomic, we have no choice but to pick the RCU
-	 * read side lock, queue up all our events, leave the critical section
-	 * and notify switchdev from blocking context.
+	if (adding)
+		action = SWITCHDEV_PORT_OBJ_ADD;
+	else
+		action = SWITCHDEV_PORT_OBJ_DEL;
+
+	/* br_switchdev_mdb_queue_one() will take care to not queue a
+	 * replay of an event that is already pending in the switchdev
+	 * deferred queue. In order to safely determine that, there
+	 * must be no new deferred MDB notifications enqueued for the
+	 * duration of the MDB scan. Therefore, grab the write-side
+	 * lock to avoid racing with any concurrent IGMP/MLD snooping.
 	 */
-	rcu_read_lock();
+	spin_lock_bh(&br->multicast_lock);
 	hlist_for_each_entry_rcu(mp, &br->mdb_list, mdb_node) {
 		struct net_bridge_port_group __rcu * const *pp;
 		const struct net_bridge_port_group *p;
 
 		if (mp->host_joined) {
-			err = br_switchdev_mdb_queue_one(&mdb_list,
+			err = br_switchdev_mdb_queue_one(&mdb_list, dev, action,
 							 SWITCHDEV_OBJ_ID_HOST_MDB,
 							 mp, br_dev);
 			if (err) {
@@ -706,7 +721,7 @@ br_switchdev_mdb_replay(struct net_device *br_dev, struct net_device *dev,
 			if (p->key.port->dev != dev)
 				continue;
 
-			err = br_switchdev_mdb_queue_one(&mdb_list,
+			err = br_switchdev_mdb_queue_one(&mdb_list, dev, action,
 							 SWITCHDEV_OBJ_ID_PORT_MDB,
 							 mp, dev);
 			if (err) {
@@ -716,12 +731,7 @@ br_switchdev_mdb_replay(struct net_device *br_dev, struct net_device *dev,
 		}
 	}
 
-	rcu_read_unlock();
-
-	if (adding)
-		action = SWITCHDEV_PORT_OBJ_ADD;
-	else
-		action = SWITCHDEV_PORT_OBJ_DEL;
+	spin_unlock_bh(&br->multicast_lock);
 
 	list_for_each_entry(obj, &mdb_list, list) {
 		err = br_switchdev_mdb_replay_one(nb, dev,