
[net,v2] team: fix possible deadlock in team_port_change_check

Message ID 20240802145935.89292-1-aha310510@gmail.com (mailing list archive)
State Changes Requested
Delegated to: Netdev Maintainers
Series [net,v2] team: fix possible deadlock in team_port_change_check

Checks

Context Check Description
netdev/series_format success Single patches do not need cover letters
netdev/tree_selection success Clearly marked for net
netdev/ynl success Generated files up to date; no warnings/errors; no diff in generated;
netdev/fixes_present success Fixes tag present in non-next series
netdev/header_inline success No static functions without inline keyword in header files
netdev/build_32bit success Errors and warnings before: 29 this patch: 29
netdev/build_tools success No tools touched, skip
netdev/cc_maintainers success CCed 5 of 5 maintainers
netdev/build_clang success Errors and warnings before: 29 this patch: 29
netdev/verify_signedoff success Signed-off-by tag matches author and committer
netdev/deprecated_api success None detected
netdev/check_selftest success No net selftest shell script
netdev/verify_fixes success Fixes tag looks correct
netdev/build_allmodconfig_warn success Errors and warnings before: 29 this patch: 29
netdev/checkpatch success total: 0 errors, 0 warnings, 0 checks, 117 lines checked
netdev/build_clang_rust success No Rust files in patch. Skipping build
netdev/kdoc success Errors and warnings before: 0 this patch: 0
netdev/source_inline success Was 0 now: 0

Commit Message

Jeongjun Park Aug. 2, 2024, 2:59 p.m. UTC
In do_setlink(), do_set_master() is called when dev->flags does not have
the IFF_UP flag set; 'team->lock' is acquired and dev_open() is called,
which generates the NETDEV_UP event. Handling that event tries to
acquire 'team->lock' again, causing a deadlock.

To fix this, remove the unnecessary mutex_lock from all paths protected
by rtnl. Besides the reported root cause, this patch also drops the
mutex_lock from other paths that are already protected by rtnl and do
not need it.

Reported-by: syzbot+b668da2bc4cb9670bf58@syzkaller.appspotmail.com
Fixes: 61dc3461b954 ("team: convert overall spinlock to mutex")
Signed-off-by: Jeongjun Park <aha310510@gmail.com>
---
 drivers/net/team/team_core.c | 22 ----------------------
 1 file changed, 22 deletions(-)

--

Comments

Eric Dumazet Aug. 2, 2024, 3:08 p.m. UTC | #1
On Fri, Aug 2, 2024 at 5:00 PM Jeongjun Park <aha310510@gmail.com> wrote:
>
> In do_setlink() , do_set_master() is called when dev->flags does not have
> the IFF_UP flag set, so 'team->lock' is acquired and dev_open() is called,
> which generates the NETDEV_UP event. This causes a deadlock as it tries to
> acquire 'team->lock' again.
>
> To fix this, we need to remove the unnecessary mutex_lock from all paths
> protected by rtnl. This patch also includes patches for paths that are
> already protected by rtnl and do not need the mutex_lock, in addition
> to the root cause reported.
>
> Reported-by: syzbot+b668da2bc4cb9670bf58@syzkaller.appspotmail.com
> Fixes: 61dc3461b954 ("team: convert overall spinlock to mutex")
> Signed-off-by: Jeongjun Park <aha310510@gmail.com>
> ---

This can not be right.

You have to completely remove team->lock, otherwise paths not taking
RTNL will race with ones holding RTNL.

Add ASSERT_RTNL() at the places we had a  mutex_lock(&team->lock), and
see how it goes.
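
Eric's suggestion amounts to replacing each team->lock critical section with an assertion that rtnl is already held. A simplified sketch of the pattern on one affected callback (illustrative only, not the final patch; the real team_del_slave() does more than shown here):

```c
/* Sketch: team_del_slave() with the mutex replaced by an rtnl
 * assertion.  ASSERT_RTNL() is a no-op when rtnl is held and warns
 * otherwise, so any caller reaching this path without rtnl shows up
 * immediately in testing. */
static int team_del_slave(struct net_device *dev,
			  struct net_device *port_dev)
{
	struct team *team = netdev_priv(dev);

	ASSERT_RTNL();		/* was: mutex_lock(&team->lock); */
	return team_port_del(team, port_dev);
	/* was: mutex_unlock(&team->lock); before returning */
}
```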
Jeongjun Park Aug. 2, 2024, 4:25 p.m. UTC | #2
Eric Dumazet wrote:
>
> On Fri, Aug 2, 2024 at 5:00 PM Jeongjun Park <aha310510@gmail.com> wrote:
> >
> > In do_setlink() , do_set_master() is called when dev->flags does not have
> > the IFF_UP flag set, so 'team->lock' is acquired and dev_open() is called,
> > which generates the NETDEV_UP event. This causes a deadlock as it tries to
> > acquire 'team->lock' again.
> >
> > To fix this, we need to remove the unnecessary mutex_lock from all paths
> > protected by rtnl. This patch also includes patches for paths that are
> > already protected by rtnl and do not need the mutex_lock, in addition
> > to the root cause reported.
> >
> > Reported-by: syzbot+b668da2bc4cb9670bf58@syzkaller.appspotmail.com
> > Fixes: 61dc3461b954 ("team: convert overall spinlock to mutex")
> > Signed-off-by: Jeongjun Park <aha310510@gmail.com>
> > ---
>
> This can not be right.
>
> You have to completely remove team->lock, otherwise paths not taking
> RTNL will race with ones holding RTNL.
>
> Add ASSERT_RTNL() at the places we had a  mutex_lock(&team->lock), and
> see how it goes.

Thanks for the advice. I've rewritten the patch to remove all the
mutex_lock calls as you suggested. I've also added handling for specific
paths that don't take rtnl_lock. What do you think of this approach?

Regards,
Jeongjun Park

---
 drivers/net/team/team_core.c | 47 +++++++++---------------------------
 include/linux/if_team.h      |  4 +--
 2 files changed, 12 insertions(+), 39 deletions(-)

diff --git a/drivers/net/team/team_core.c b/drivers/net/team/team_core.c
index ab1935a4aa2c..1e2c631e3b78 100644
--- a/drivers/net/team/team_core.c
+++ b/drivers/net/team/team_core.c
@@ -1621,6 +1621,7 @@ static int team_init(struct net_device *dev)
 	team->dev = dev;
 	team_set_no_mode(team);
 	team->notifier_ctx = false;
+	team->rtnl_locked = false;
 
 	team->pcpu_stats = netdev_alloc_pcpu_stats(struct team_pcpu_stats);
 	if (!team->pcpu_stats)
@@ -1646,10 +1647,6 @@ static int team_init(struct net_device *dev)
 		goto err_options_register;
 	netif_carrier_off(dev);
 
-	lockdep_register_key(&team->team_lock_key);
-	__mutex_init(&team->lock, "team->team_lock_key", &team->team_lock_key);
-	netdev_lockdep_set_classes(dev);
-
 	return 0;
 
 err_options_register:
@@ -1668,7 +1665,6 @@ static void team_uninit(struct net_device *dev)
 	struct team_port *port;
 	struct team_port *tmp;
 
-	mutex_lock(&team->lock);
 	list_for_each_entry_safe(port, tmp, &team->port_list, list)
 		team_port_del(team, port->dev);
 
@@ -1677,9 +1673,7 @@ static void team_uninit(struct net_device *dev)
 	team_mcast_rejoin_fini(team);
 	team_notify_peers_fini(team);
 	team_queue_override_fini(team);
-	mutex_unlock(&team->lock);
 	netdev_change_features(dev);
-	lockdep_unregister_key(&team->team_lock_key);
 }
 
 static void team_destructor(struct net_device *dev)
@@ -1800,11 +1794,9 @@ static int team_set_mac_address(struct net_device *dev, void *p)
 	if (dev->type == ARPHRD_ETHER && !is_valid_ether_addr(addr->sa_data))
 		return -EADDRNOTAVAIL;
 	dev_addr_set(dev, addr->sa_data);
-	mutex_lock(&team->lock);
 	list_for_each_entry(port, &team->port_list, list)
 		if (team->ops.port_change_dev_addr)
 			team->ops.port_change_dev_addr(team, port);
-	mutex_unlock(&team->lock);
 	return 0;
 }
 
@@ -1818,7 +1810,6 @@ static int team_change_mtu(struct net_device *dev, int new_mtu)
 	 * Alhough this is reader, it's guarded by team lock. It's not possible
 	 * to traverse list in reverse under rcu_read_lock
 	 */
-	mutex_lock(&team->lock);
 	team->port_mtu_change_allowed = true;
 	list_for_each_entry(port, &team->port_list, list) {
 		err = dev_set_mtu(port->dev, new_mtu);
@@ -1829,7 +1820,6 @@ static int team_change_mtu(struct net_device *dev, int new_mtu)
 		}
 	}
 	team->port_mtu_change_allowed = false;
-	mutex_unlock(&team->lock);
 
 	WRITE_ONCE(dev->mtu, new_mtu);
 
@@ -1839,7 +1829,6 @@ static int team_change_mtu(struct net_device *dev, int new_mtu)
 	list_for_each_entry_continue_reverse(port, &team->port_list, list)
 		dev_set_mtu(port->dev, dev->mtu);
 	team->port_mtu_change_allowed = false;
-	mutex_unlock(&team->lock);
 
 	return err;
 }
@@ -1893,20 +1882,17 @@ static int team_vlan_rx_add_vid(struct net_device *dev, __be16 proto, u16 vid)
 	 * Alhough this is reader, it's guarded by team lock. It's not possible
 	 * to traverse list in reverse under rcu_read_lock
 	 */
-	mutex_lock(&team->lock);
 	list_for_each_entry(port, &team->port_list, list) {
 		err = vlan_vid_add(port->dev, proto, vid);
 		if (err)
 			goto unwind;
 	}
-	mutex_unlock(&team->lock);
 
 	return 0;
 
 unwind:
 	list_for_each_entry_continue_reverse(port, &team->port_list, list)
 		vlan_vid_del(port->dev, proto, vid);
-	mutex_unlock(&team->lock);
 
 	return err;
 }
@@ -1916,10 +1902,8 @@ static int team_vlan_rx_kill_vid(struct net_device *dev, __be16 proto, u16 vid)
 	struct team *team = netdev_priv(dev);
 	struct team_port *port;
 
-	mutex_lock(&team->lock);
 	list_for_each_entry(port, &team->port_list, list)
 		vlan_vid_del(port->dev, proto, vid);
-	mutex_unlock(&team->lock);
 
 	return 0;
 }
@@ -1941,9 +1925,7 @@ static void team_netpoll_cleanup(struct net_device *dev)
 {
 	struct team *team = netdev_priv(dev);
 
-	mutex_lock(&team->lock);
 	__team_netpoll_cleanup(team);
-	mutex_unlock(&team->lock);
 }
 
 static int team_netpoll_setup(struct net_device *dev,
@@ -1953,7 +1935,6 @@ static int team_netpoll_setup(struct net_device *dev,
 	struct team_port *port;
 	int err = 0;
 
-	mutex_lock(&team->lock);
 	list_for_each_entry(port, &team->port_list, list) {
 		err = __team_port_enable_netpoll(port);
 		if (err) {
@@ -1961,7 +1942,6 @@ static int team_netpoll_setup(struct net_device *dev,
 			break;
 		}
 	}
-	mutex_unlock(&team->lock);
 	return err;
 }
 #endif
@@ -1972,9 +1952,7 @@ static int team_add_slave(struct net_device *dev, struct net_device *port_dev,
 	struct team *team = netdev_priv(dev);
 	int err;
 
-	mutex_lock(&team->lock);
 	err = team_port_add(team, port_dev, extack);
-	mutex_unlock(&team->lock);
 
 	if (!err)
 		netdev_change_features(dev);
@@ -1987,18 +1965,11 @@ static int team_del_slave(struct net_device *dev, struct net_device *port_dev)
 	struct team *team = netdev_priv(dev);
 	int err;
 
-	mutex_lock(&team->lock);
 	err = team_port_del(team, port_dev);
-	mutex_unlock(&team->lock);
 
 	if (err)
 		return err;
 
-	if (netif_is_team_master(port_dev)) {
-		lockdep_unregister_key(&team->team_lock_key);
-		lockdep_register_key(&team->team_lock_key);
-		lockdep_set_class(&team->lock, &team->team_lock_key);
-	}
 	netdev_change_features(dev);
 
 	return err;
@@ -2305,13 +2276,11 @@ static struct team *team_nl_team_get(struct genl_info *info)
 	}
 
 	team = netdev_priv(dev);
-	mutex_lock(&team->lock);
 	return team;
 }
 
 static void team_nl_team_put(struct team *team)
 {
-	mutex_unlock(&team->lock);
 	dev_put(team->dev);
 }
 
@@ -2501,6 +2470,11 @@ int team_nl_options_get_doit(struct sk_buff *skb, struct genl_info *info)
 	int err;
 	LIST_HEAD(sel_opt_inst_list);
 
+	if (!rtnl_is_locked()) {
+		rtnl_lock();
+		team->rtnl_locked = true;
+	}
+
 	team = team_nl_team_get(info);
 	if (!team)
 		return -EINVAL;
@@ -2513,6 +2487,11 @@ int team_nl_options_get_doit(struct sk_buff *skb, struct genl_info *info)
 
 	team_nl_team_put(team);
 
+	if (team->rtnl_locked) {
+		team->rtnl_locked = false;
+		rtnl_unlock();
+	}
+
 	return err;
 }
 
@@ -2945,11 +2924,7 @@ static void __team_port_change_port_removed(struct team_port *port)
 
 static void team_port_change_check(struct team_port *port, bool linkup)
 {
-	struct team *team = port->team;
-
-	mutex_lock(&team->lock);
 	__team_port_change_check(port, linkup);
-	mutex_unlock(&team->lock);
 }
 
 
diff --git a/include/linux/if_team.h b/include/linux/if_team.h
index cdc684e04a2f..930c5deb4ad1 100644
--- a/include/linux/if_team.h
+++ b/include/linux/if_team.h
@@ -191,8 +191,6 @@ struct team {
 
 	const struct header_ops *header_ops_cache;
 
-	struct mutex lock; /* used for overall locking, e.g. port lists write */
-
 	/*
 	 * List of enabled ports and their count
 	 */
@@ -223,8 +221,8 @@ struct team {
 		atomic_t count_pending;
 		struct delayed_work dw;
 	} mcast_rejoin;
-	struct lock_class_key team_lock_key;
 	long mode_priv[TEAM_MODE_PRIV_LONGS];
+	bool rtnl_locked;
 };
 
 static inline int team_dev_queue_xmit(struct team *team, struct team_port *port,
--
Jiri Pirko Aug. 2, 2024, 5:42 p.m. UTC | #3
Fri, Aug 02, 2024 at 06:25:31PM CEST, aha310510@gmail.com wrote:
>Eric Dumazet wrote:
>>
>> On Fri, Aug 2, 2024 at 5:00 PM Jeongjun Park <aha310510@gmail.com> wrote:
>> >

[..]

>@@ -2501,6 +2470,11 @@ int team_nl_options_get_doit(struct sk_buff *skb, struct genl_info *info)
> 	int err;
> 	LIST_HEAD(sel_opt_inst_list);
> 
>+	if (!rtnl_is_locked()) {

This is completely wrong, other thread may hold the lock.


>+		rtnl_lock();

NACK! I wrote it in the other thread. Don't take rtnl for get options
command. It is used for repeated fetch of stats. It's read only. Should
be converted to RCU.
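
The reader side of that conversion would look roughly like this; a hypothetical sketch, assuming writers switch the option instance list to list_add_rcu()/list_del_rcu() and free entries after a grace period (fill_one_option() is an illustrative stand-in, not a real driver function):

```c
/* Hypothetical reader side for the get-options path: no team->lock,
 * no rtnl.  Correct only if writers update option_inst_list with
 * RCU-safe list primitives and defer freeing past a grace period. */
rcu_read_lock();
list_for_each_entry_rcu(opt_inst, &team->option_inst_list, list)
	fill_one_option(skb, opt_inst);	/* read-only snapshot */
rcu_read_unlock();
```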

Why are you so obsessed with this hypothetical syzkaller bug? Are you
hitting it for real? If not, please let it go. I will fix it myself
when I find some spare cycles.




>+		team->rtnl_locked = true;
>+	}
>+
> 	team = team_nl_team_get(info);
> 	if (!team)
> 		return -EINVAL;
>@@ -2513,6 +2487,11 @@ int team_nl_options_get_doit(struct sk_buff *skb, struct genl_info *info)
> 
> 	team_nl_team_put(team);
> 
>+	if (team->rtnl_locked) {
>+		team->rtnl_locked = false;
>+		rtnl_unlock();
>+	}
>+
> 	return err;
> }
> 

[..]
Jeongjun Park Aug. 3, 2024, 1:36 a.m. UTC | #4
> Jiri Pirko wrote:
> 
> Fri, Aug 02, 2024 at 06:25:31PM CEST, aha310510@gmail.com wrote:
>> Eric Dumazet wrote:
>>> 
>>>> On Fri, Aug 2, 2024 at 5:00 PM Jeongjun Park <aha310510@gmail.com> wrote:
>>>>> 
>> 
>> [..]
>> 
>> @@ -2501,6 +2470,11 @@ int team_nl_options_get_doit(struct sk_buff *skb, struct genl_info *info)
>>    int err;
>>    LIST_HEAD(sel_opt_inst_list);
>> 
>> +    if (!rtnl_is_locked()) {
> 
> This is completely wrong, other thread may hold the lock.
> 
> 
>> +        rtnl_lock();
> 
> NACK! I wrote it in the other thread. Don't take rtnl for get options
> command. It is used for repeated fetch of stats. It's read only. Should
> be converted to RCU.
> 

I see. But currently, when called through the following path:
team_nl_send_event_options_get()->
team_nl_send_options_get()->
team_nl_fill_one_option_get()
it is protected by rtnl. Does this mean that rcu should be
used instead of rtnl in this case as well?

> Why are you so obsessed by this hypothetical syzcaller bug? Are you
> hitting this in real? If not, please let it go. I will fix it myself
> when I find some spare cycles.

Sorry for the inconvenience, but I don't want to give up on this bug
so easily, since it is a valid bug that we have already started
analyzing and the direction of the fix is clear. I hope you
understand; I will send a patch that uses rcu instead
of rtnl soon.

Regards,
Jeongjun Park
Jiri Pirko Aug. 3, 2024, 6:58 a.m. UTC | #5
Sat, Aug 03, 2024 at 03:36:48AM CEST, aha310510@gmail.com wrote:
>
>
>> Jiri Pirko wrote:
>> 
>> Fri, Aug 02, 2024 at 06:25:31PM CEST, aha310510@gmail.com wrote:
>>> Eric Dumazet wrote:
>>>> 
>>>>> On Fri, Aug 2, 2024 at 5:00 PM Jeongjun Park <aha310510@gmail.com> wrote:
>>>>>> 
>>> 
>>> [..]
>>> 
>>> @@ -2501,6 +2470,11 @@ int team_nl_options_get_doit(struct sk_buff *skb, struct genl_info *info)
>>>    int err;
>>>    LIST_HEAD(sel_opt_inst_list);
>>> 
>>> +    if (!rtnl_is_locked()) {
>> 
>> This is completely wrong, other thread may hold the lock.
>> 
>> 
>>> +        rtnl_lock();
>> 
>> NACK! I wrote it in the other thread. Don't take rtnl for get options
>> command. It is used for repeated fetch of stats. It's read only. Should
>> be converted to RCU.
>> 
>
>I see. But, in the current, when called through the following path:
>team_nl_send_event_options_get()->
>team_nl_send_options_get()->
>team_nl_fill_one_option_get()
>, it was protected through rtnl. Does this mean that rcu should be 

Not true. team_nl_send_event_options_get() is sometimes called without
rtnl (from lb_stats_refresh() for example).


>used instead of rtnl in this case as well?
>
>> Why are you so obsessed by this hypothetical syzcaller bug? Are you
>> hitting this in real? If not, please let it go. I will fix it myself
>> when I find some spare cycles.
>
>Sorry for the inconvenience, but I don't want to give up on this bug 
>so easily since it is a valid bug that we have started analyzing 
>anyway and the direction of how to fix it is clear. I hope you 
>understand and I will send you a patch that uses rcu instead 

I don't understand, sorry. Apparently you don't have any clue what you
are doing and only waste people's time. Can't you find another toy to
play with?

Please add my
Nacked-by: Jiri Pirko <jiri@nvidia.com>
tag to any future attempt of yours, as I'm sure it won't be correct.

Thanks!



>of rtnl soon.
>
>Regards,
>Jeongjun Park
Jeongjun Park Aug. 3, 2024, 11:51 a.m. UTC | #6
Jiri Pirko wrote:
>
> Sat, Aug 03, 2024 at 03:36:48AM CEST, aha310510@gmail.com wrote:
> >
> >
> >> Jiri Pirko wrote:
> >>
> >> Fri, Aug 02, 2024 at 06:25:31PM CEST, aha310510@gmail.com wrote:
> >>> Eric Dumazet wrote:
> >>>>
> >>>>> On Fri, Aug 2, 2024 at 5:00 PM Jeongjun Park <aha310510@gmail.com> wrote:
> >>>>>>
> >>>
> >>> [..]
> >>>
> >>> @@ -2501,6 +2470,11 @@ int team_nl_options_get_doit(struct sk_buff *skb, struct genl_info *info)
> >>>    int err;
> >>>    LIST_HEAD(sel_opt_inst_list);
> >>>
> >>> +    if (!rtnl_is_locked()) {
> >>
> >> This is completely wrong, other thread may hold the lock.
> >>
> >>
> >>> +        rtnl_lock();
> >>
> >> NACK! I wrote it in the other thread. Don't take rtnl for get options
> >> command. It is used for repeated fetch of stats. It's read only. Should
> >> be converted to RCU.
> >>
> >
> >I see. But, in the current, when called through the following path:
> >team_nl_send_event_options_get()->
> >team_nl_send_options_get()->
> >team_nl_fill_one_option_get()
> >, it was protected through rtnl. Does this mean that rcu should be
>
> Not true. team_nl_send_event_options_get() is sometimes called without
> rtnl (from lb_stats_refresh() for example).
>
>
> >used instead of rtnl in this case as well?
> >
> >> Why are you so obsessed by this hypothetical syzcaller bug? Are you
> >> hitting this in real? If not, please let it go. I will fix it myself
> >> when I find some spare cycles.
> >
> >Sorry for the inconvenience, but I don't want to give up on this bug
> >so easily since it is a valid bug that we have started analyzing
> >anyway and the direction of how to fix it is clear. I hope you
> >understand and I will send you a patch that uses rcu instead
>
> I don't understand, sorry. Apparently you don't have any clue what you
> are doing and only waste people time. Can't you find another toy to
> play?
>
> Please add my
> Nacked-by: Jiri Pirko <jiri@nvidia.com>
> tag to your any future attempt, as I'm sure it won't be correct.
>

I am truly sorry. I feel like I wasted people's time by repeatedly
sending meaningless patches and approaching this issue recklessly. I
will be more careful in my analysis and emails in the future.

Regards,
Jeongjun Park

Patch

diff --git a/drivers/net/team/team_core.c b/drivers/net/team/team_core.c
index ab1935a4aa2c..f7bab2d2a281 100644
--- a/drivers/net/team/team_core.c
+++ b/drivers/net/team/team_core.c
@@ -1668,7 +1668,6 @@  static void team_uninit(struct net_device *dev)
 	struct team_port *port;
 	struct team_port *tmp;
 
-	mutex_lock(&team->lock);
 	list_for_each_entry_safe(port, tmp, &team->port_list, list)
 		team_port_del(team, port->dev);
 
@@ -1677,7 +1676,6 @@  static void team_uninit(struct net_device *dev)
 	team_mcast_rejoin_fini(team);
 	team_notify_peers_fini(team);
 	team_queue_override_fini(team);
-	mutex_unlock(&team->lock);
 	netdev_change_features(dev);
 	lockdep_unregister_key(&team->team_lock_key);
 }
@@ -1818,7 +1816,6 @@  static int team_change_mtu(struct net_device *dev, int new_mtu)
 	 * Alhough this is reader, it's guarded by team lock. It's not possible
 	 * to traverse list in reverse under rcu_read_lock
 	 */
-	mutex_lock(&team->lock);
 	team->port_mtu_change_allowed = true;
 	list_for_each_entry(port, &team->port_list, list) {
 		err = dev_set_mtu(port->dev, new_mtu);
@@ -1829,7 +1826,6 @@  static int team_change_mtu(struct net_device *dev, int new_mtu)
 		}
 	}
 	team->port_mtu_change_allowed = false;
-	mutex_unlock(&team->lock);
 
 	WRITE_ONCE(dev->mtu, new_mtu);
 
@@ -1839,7 +1835,6 @@  static int team_change_mtu(struct net_device *dev, int new_mtu)
 	list_for_each_entry_continue_reverse(port, &team->port_list, list)
 		dev_set_mtu(port->dev, dev->mtu);
 	team->port_mtu_change_allowed = false;
-	mutex_unlock(&team->lock);
 
 	return err;
 }
@@ -1893,20 +1888,17 @@  static int team_vlan_rx_add_vid(struct net_device *dev, __be16 proto, u16 vid)
 	 * Alhough this is reader, it's guarded by team lock. It's not possible
 	 * to traverse list in reverse under rcu_read_lock
 	 */
-	mutex_lock(&team->lock);
 	list_for_each_entry(port, &team->port_list, list) {
 		err = vlan_vid_add(port->dev, proto, vid);
 		if (err)
 			goto unwind;
 	}
-	mutex_unlock(&team->lock);
 
 	return 0;
 
 unwind:
 	list_for_each_entry_continue_reverse(port, &team->port_list, list)
 		vlan_vid_del(port->dev, proto, vid);
-	mutex_unlock(&team->lock);
 
 	return err;
 }
@@ -1916,10 +1908,8 @@  static int team_vlan_rx_kill_vid(struct net_device *dev, __be16 proto, u16 vid)
 	struct team *team = netdev_priv(dev);
 	struct team_port *port;
 
-	mutex_lock(&team->lock);
 	list_for_each_entry(port, &team->port_list, list)
 		vlan_vid_del(port->dev, proto, vid);
-	mutex_unlock(&team->lock);
 
 	return 0;
 }
@@ -1941,9 +1931,7 @@  static void team_netpoll_cleanup(struct net_device *dev)
 {
 	struct team *team = netdev_priv(dev);
 
-	mutex_lock(&team->lock);
 	__team_netpoll_cleanup(team);
-	mutex_unlock(&team->lock);
 }
 
 static int team_netpoll_setup(struct net_device *dev,
@@ -1953,7 +1941,6 @@  static int team_netpoll_setup(struct net_device *dev,
 	struct team_port *port;
 	int err = 0;
 
-	mutex_lock(&team->lock);
 	list_for_each_entry(port, &team->port_list, list) {
 		err = __team_port_enable_netpoll(port);
 		if (err) {
@@ -1961,7 +1948,6 @@  static int team_netpoll_setup(struct net_device *dev,
 			break;
 		}
 	}
-	mutex_unlock(&team->lock);
 	return err;
 }
 #endif
@@ -1972,9 +1958,7 @@  static int team_add_slave(struct net_device *dev, struct net_device *port_dev,
 	struct team *team = netdev_priv(dev);
 	int err;
 
-	mutex_lock(&team->lock);
 	err = team_port_add(team, port_dev, extack);
-	mutex_unlock(&team->lock);
 
 	if (!err)
 		netdev_change_features(dev);
@@ -1987,9 +1971,7 @@  static int team_del_slave(struct net_device *dev, struct net_device *port_dev)
 	struct team *team = netdev_priv(dev);
 	int err;
 
-	mutex_lock(&team->lock);
 	err = team_port_del(team, port_dev);
-	mutex_unlock(&team->lock);
 
 	if (err)
 		return err;
@@ -2945,11 +2927,7 @@  static void __team_port_change_port_removed(struct team_port *port)
 
 static void team_port_change_check(struct team_port *port, bool linkup)
 {
-	struct team *team = port->team;
-
-	mutex_lock(&team->lock);
 	__team_port_change_check(port, linkup);
-	mutex_unlock(&team->lock);
 }